Test Report: Docker_macOS 18779

c20b56ce109690ce92fd9e26e987f9b16f237ff0:2024-04-30:34278

Failed tests (22/201)

TestOffline (758.55s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-844000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-844000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m37.657059133s)
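For local triage, the failing invocation can be replayed outside the test harness. A minimal sketch, reusing the exact flags and profile name from the log above (any unused profile name would do):

	# Re-run the start that exited with status 52 (~12m37s) in this run:
	out/minikube-darwin-amd64 start -p offline-docker-844000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker

	# Clean up the profile and its Docker artifacts afterwards:
	out/minikube-darwin-amd64 delete -p offline-docker-844000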

-- stdout --
	* [offline-docker-844000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18779
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "offline-docker-844000" primary control-plane node in "offline-docker-844000" cluster
	* Pulling base image v0.0.43-1714386659-18769 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-844000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0430 21:03:28.587311   16515 out.go:291] Setting OutFile to fd 1 ...
	I0430 21:03:28.587557   16515 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 21:03:28.587562   16515 out.go:304] Setting ErrFile to fd 2...
	I0430 21:03:28.587566   16515 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 21:03:28.587732   16515 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 21:03:28.589300   16515 out.go:298] Setting JSON to false
	I0430 21:03:28.612408   16515 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":7379,"bootTime":1714528829,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0430 21:03:28.612503   16515 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0430 21:03:28.634619   16515 out.go:177] * [offline-docker-844000] minikube v1.33.0 on Darwin 14.4.1
	I0430 21:03:28.676232   16515 out.go:177]   - MINIKUBE_LOCATION=18779
	I0430 21:03:28.676246   16515 notify.go:220] Checking for updates...
	I0430 21:03:28.718291   16515 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	I0430 21:03:28.760395   16515 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0430 21:03:28.781346   16515 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0430 21:03:28.802151   16515 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	I0430 21:03:28.823341   16515 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0430 21:03:28.844605   16515 driver.go:392] Setting default libvirt URI to qemu:///system
	I0430 21:03:28.899320   16515 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0430 21:03:28.899521   16515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0430 21:03:29.062639   16515 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:101 OomKillDisable:false NGoroutines:185 SystemTime:2024-05-01 04:03:29.018738342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211080192 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0430 21:03:29.105705   16515 out.go:177] * Using the docker driver based on user configuration
	I0430 21:03:29.126509   16515 start.go:297] selected driver: docker
	I0430 21:03:29.126542   16515 start.go:901] validating driver "docker" against <nil>
	I0430 21:03:29.126550   16515 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0430 21:03:29.129472   16515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0430 21:03:29.236809   16515 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:101 OomKillDisable:false NGoroutines:185 SystemTime:2024-05-01 04:03:29.224692399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211080192 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0430 21:03:29.236987   16515 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0430 21:03:29.237196   16515 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0430 21:03:29.258769   16515 out.go:177] * Using Docker Desktop driver with root privileges
	I0430 21:03:29.279958   16515 cni.go:84] Creating CNI manager for ""
	I0430 21:03:29.280000   16515 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0430 21:03:29.280010   16515 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0430 21:03:29.280104   16515 start.go:340] cluster config:
	{Name:offline-docker-844000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-844000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0430 21:03:29.301802   16515 out.go:177] * Starting "offline-docker-844000" primary control-plane node in "offline-docker-844000" cluster
	I0430 21:03:29.322942   16515 cache.go:121] Beginning downloading kic base image for docker with docker
	I0430 21:03:29.365536   16515 out.go:177] * Pulling base image v0.0.43-1714386659-18769 ...
	I0430 21:03:29.386540   16515 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 21:03:29.386559   16515 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e in local docker daemon
	I0430 21:03:29.386579   16515 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0430 21:03:29.386590   16515 cache.go:56] Caching tarball of preloaded images
	I0430 21:03:29.386708   16515 preload.go:173] Found /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0430 21:03:29.386722   16515 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0430 21:03:29.387594   16515 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/offline-docker-844000/config.json ...
	I0430 21:03:29.387659   16515 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/offline-docker-844000/config.json: {Name:mkc562e3a6af054290918efba51f7f8435df449b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0430 21:03:29.436488   16515 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e in local docker daemon, skipping pull
	I0430 21:03:29.436510   16515 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e exists in daemon, skipping load
	I0430 21:03:29.436526   16515 cache.go:194] Successfully downloaded all kic artifacts
	I0430 21:03:29.436565   16515 start.go:360] acquireMachinesLock for offline-docker-844000: {Name:mk61390b354af4a26a91cf82f0824f32c95c8bdc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0430 21:03:29.436735   16515 start.go:364] duration metric: took 158.318µs to acquireMachinesLock for "offline-docker-844000"
	I0430 21:03:29.436763   16515 start.go:93] Provisioning new machine with config: &{Name:offline-docker-844000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:offline-docker-844000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0430 21:03:29.436832   16515 start.go:125] createHost starting for "" (driver="docker")
	I0430 21:03:29.458756   16515 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0430 21:03:29.459054   16515 start.go:159] libmachine.API.Create for "offline-docker-844000" (driver="docker")
	I0430 21:03:29.459098   16515 client.go:168] LocalClient.Create starting
	I0430 21:03:29.459296   16515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/ca.pem
	I0430 21:03:29.459396   16515 main.go:141] libmachine: Decoding PEM data...
	I0430 21:03:29.459424   16515 main.go:141] libmachine: Parsing certificate...
	I0430 21:03:29.459549   16515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/cert.pem
	I0430 21:03:29.459622   16515 main.go:141] libmachine: Decoding PEM data...
	I0430 21:03:29.459641   16515 main.go:141] libmachine: Parsing certificate...
	I0430 21:03:29.460787   16515 cli_runner.go:164] Run: docker network inspect offline-docker-844000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0430 21:03:29.530473   16515 cli_runner.go:211] docker network inspect offline-docker-844000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0430 21:03:29.530567   16515 network_create.go:281] running [docker network inspect offline-docker-844000] to gather additional debugging logs...
	I0430 21:03:29.530591   16515 cli_runner.go:164] Run: docker network inspect offline-docker-844000
	W0430 21:03:29.580188   16515 cli_runner.go:211] docker network inspect offline-docker-844000 returned with exit code 1
	I0430 21:03:29.580223   16515 network_create.go:284] error running [docker network inspect offline-docker-844000]: docker network inspect offline-docker-844000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-844000 not found
	I0430 21:03:29.580240   16515 network_create.go:286] output of [docker network inspect offline-docker-844000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-844000 not found
	
	** /stderr **
	I0430 21:03:29.580380   16515 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0430 21:03:29.679268   16515 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:03:29.680710   16515 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:03:29.681071   16515 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022a4020}
	I0430 21:03:29.681088   16515 network_create.go:124] attempt to create docker network offline-docker-844000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0430 21:03:29.681152   16515 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-844000 offline-docker-844000
	W0430 21:03:29.730038   16515 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-844000 offline-docker-844000 returned with exit code 1
	W0430 21:03:29.730073   16515 network_create.go:149] failed to create docker network offline-docker-844000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-844000 offline-docker-844000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0430 21:03:29.730090   16515 network_create.go:116] failed to create docker network offline-docker-844000 192.168.67.0/24, will retry: subnet is taken
	I0430 21:03:29.731667   16515 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:03:29.732028   16515 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022530c0}
	I0430 21:03:29.732039   16515 network_create.go:124] attempt to create docker network offline-docker-844000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0430 21:03:29.732115   16515 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-844000 offline-docker-844000
	I0430 21:03:29.818150   16515 network_create.go:108] docker network offline-docker-844000 192.168.76.0/24 created
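The "Pool overlaps with other one on this address space" error above means another Docker network had already reserved 192.168.67.0/24; minikube's retry logic then skipped that subnet and created the network on 192.168.76.0/24. When triaging overlap failures, the subnets currently reserved can be listed with a standard Docker one-liner (a diagnostic sketch, not part of the test run):

	# Show each Docker network together with the subnet(s) it reserves:
	docker network ls -q | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'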
	I0430 21:03:29.818187   16515 kic.go:121] calculated static IP "192.168.76.2" for the "offline-docker-844000" container
	I0430 21:03:29.818304   16515 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0430 21:03:29.868865   16515 cli_runner.go:164] Run: docker volume create offline-docker-844000 --label name.minikube.sigs.k8s.io=offline-docker-844000 --label created_by.minikube.sigs.k8s.io=true
	I0430 21:03:29.918406   16515 oci.go:103] Successfully created a docker volume offline-docker-844000
	I0430 21:03:29.918514   16515 cli_runner.go:164] Run: docker run --rm --name offline-docker-844000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-844000 --entrypoint /usr/bin/test -v offline-docker-844000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -d /var/lib
	I0430 21:03:30.518511   16515 oci.go:107] Successfully prepared a docker volume offline-docker-844000
	I0430 21:03:30.518549   16515 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 21:03:30.518569   16515 kic.go:194] Starting extracting preloaded images to volume ...
	I0430 21:03:30.518675   16515 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-844000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -I lz4 -xf /preloaded.tar -C /extractDir
	I0430 21:09:29.591490   16515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0430 21:09:29.591636   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:09:29.642776   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	I0430 21:09:29.642915   16515 retry.go:31] will retry after 251.508119ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:29.896781   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:09:29.947745   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	I0430 21:09:29.947853   16515 retry.go:31] will retry after 249.25424ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:30.199483   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:09:30.252788   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	I0430 21:09:30.252897   16515 retry.go:31] will retry after 553.916425ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:30.807628   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:09:30.859365   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	I0430 21:09:30.859467   16515 retry.go:31] will retry after 691.732478ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:31.551920   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:09:31.604262   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	W0430 21:09:31.604374   16515 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	
	W0430 21:09:31.604396   16515 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:31.604461   16515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0430 21:09:31.604529   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:09:31.652634   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	I0430 21:09:31.652724   16515 retry.go:31] will retry after 338.612223ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:31.993685   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:09:32.046265   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	I0430 21:09:32.046362   16515 retry.go:31] will retry after 509.081088ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:32.556322   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:09:32.607162   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	I0430 21:09:32.607255   16515 retry.go:31] will retry after 619.214986ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:33.228364   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:09:33.278795   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	W0430 21:09:33.278894   16515 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	
	W0430 21:09:33.278911   16515 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:33.278934   16515 start.go:128] duration metric: took 6m3.710579117s to createHost
	I0430 21:09:33.278960   16515 start.go:83] releasing machines lock for "offline-docker-844000", held for 6m3.710706657s
	W0430 21:09:33.278976   16515 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0430 21:09:33.279428   16515 cli_runner.go:164] Run: docker container inspect offline-docker-844000 --format={{.State.Status}}
	W0430 21:09:33.327002   16515 cli_runner.go:211] docker container inspect offline-docker-844000 --format={{.State.Status}} returned with exit code 1
	I0430 21:09:33.327059   16515 delete.go:82] Unable to get host status for offline-docker-844000, assuming it has already been deleted: state: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	W0430 21:09:33.327129   16515 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0430 21:09:33.327139   16515 start.go:728] Will try again in 5 seconds ...
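Note the six-minute gap above: the preload extraction launched at 21:03:30 is the last step logged before the 360-second createHost timeout fires at 21:09:29, and no node container was ever created, which is why every "docker container inspect" reports "No such container". Whether the container exists in any state can be checked directly (a diagnostic sketch, not part of the test run):

	# Show the expected node container, if present in any state:
	docker ps -a --filter name=offline-docker-844000 --format '{{.Names}}\t{{.Status}}'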
	I0430 21:09:38.329430   16515 start.go:360] acquireMachinesLock for offline-docker-844000: {Name:mk61390b354af4a26a91cf82f0824f32c95c8bdc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0430 21:09:38.330442   16515 start.go:364] duration metric: took 937.842µs to acquireMachinesLock for "offline-docker-844000"
	I0430 21:09:38.330504   16515 start.go:96] Skipping create...Using existing machine configuration
	I0430 21:09:38.330526   16515 fix.go:54] fixHost starting: 
	I0430 21:09:38.331070   16515 cli_runner.go:164] Run: docker container inspect offline-docker-844000 --format={{.State.Status}}
	W0430 21:09:38.382229   16515 cli_runner.go:211] docker container inspect offline-docker-844000 --format={{.State.Status}} returned with exit code 1
	I0430 21:09:38.382275   16515 fix.go:112] recreateIfNeeded on offline-docker-844000: state= err=unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:38.382294   16515 fix.go:117] machineExists: false. err=machine does not exist
	I0430 21:09:38.404249   16515 out.go:177] * docker "offline-docker-844000" container is missing, will recreate.
	I0430 21:09:38.449953   16515 delete.go:124] DEMOLISHING offline-docker-844000 ...
	I0430 21:09:38.450125   16515 cli_runner.go:164] Run: docker container inspect offline-docker-844000 --format={{.State.Status}}
	W0430 21:09:38.499799   16515 cli_runner.go:211] docker container inspect offline-docker-844000 --format={{.State.Status}} returned with exit code 1
	W0430 21:09:38.499871   16515 stop.go:83] unable to get state: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:38.499888   16515 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:38.500255   16515 cli_runner.go:164] Run: docker container inspect offline-docker-844000 --format={{.State.Status}}
	W0430 21:09:38.547694   16515 cli_runner.go:211] docker container inspect offline-docker-844000 --format={{.State.Status}} returned with exit code 1
	I0430 21:09:38.547758   16515 delete.go:82] Unable to get host status for offline-docker-844000, assuming it has already been deleted: state: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:38.547834   16515 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-844000
	W0430 21:09:38.595226   16515 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-844000 returned with exit code 1
	I0430 21:09:38.595272   16515 kic.go:371] could not find the container offline-docker-844000 to remove it. will try anyways
	I0430 21:09:38.595350   16515 cli_runner.go:164] Run: docker container inspect offline-docker-844000 --format={{.State.Status}}
	W0430 21:09:38.643072   16515 cli_runner.go:211] docker container inspect offline-docker-844000 --format={{.State.Status}} returned with exit code 1
	W0430 21:09:38.643133   16515 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:38.643226   16515 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-844000 /bin/bash -c "sudo init 0"
	W0430 21:09:38.690756   16515 cli_runner.go:211] docker exec --privileged -t offline-docker-844000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0430 21:09:38.690788   16515 oci.go:650] error shutdown offline-docker-844000: docker exec --privileged -t offline-docker-844000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:39.693155   16515 cli_runner.go:164] Run: docker container inspect offline-docker-844000 --format={{.State.Status}}
	W0430 21:09:39.744323   16515 cli_runner.go:211] docker container inspect offline-docker-844000 --format={{.State.Status}} returned with exit code 1
	I0430 21:09:39.744370   16515 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:39.744384   16515 oci.go:664] temporary error: container offline-docker-844000 status is  but expect it to be exited
	I0430 21:09:39.744409   16515 retry.go:31] will retry after 348.75622ms: couldn't verify container is exited. %v: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:40.094691   16515 cli_runner.go:164] Run: docker container inspect offline-docker-844000 --format={{.State.Status}}
	W0430 21:09:40.146485   16515 cli_runner.go:211] docker container inspect offline-docker-844000 --format={{.State.Status}} returned with exit code 1
	I0430 21:09:40.146536   16515 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:40.146549   16515 oci.go:664] temporary error: container offline-docker-844000 status is  but expect it to be exited
	I0430 21:09:40.146576   16515 retry.go:31] will retry after 805.991073ms: couldn't verify container is exited. %v: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:40.953010   16515 cli_runner.go:164] Run: docker container inspect offline-docker-844000 --format={{.State.Status}}
	W0430 21:09:41.002386   16515 cli_runner.go:211] docker container inspect offline-docker-844000 --format={{.State.Status}} returned with exit code 1
	I0430 21:09:41.002442   16515 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:41.002452   16515 oci.go:664] temporary error: container offline-docker-844000 status is  but expect it to be exited
	I0430 21:09:41.002478   16515 retry.go:31] will retry after 1.611844166s: couldn't verify container is exited. %v: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:42.615149   16515 cli_runner.go:164] Run: docker container inspect offline-docker-844000 --format={{.State.Status}}
	W0430 21:09:42.667837   16515 cli_runner.go:211] docker container inspect offline-docker-844000 --format={{.State.Status}} returned with exit code 1
	I0430 21:09:42.667884   16515 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:42.667894   16515 oci.go:664] temporary error: container offline-docker-844000 status is  but expect it to be exited
	I0430 21:09:42.667919   16515 retry.go:31] will retry after 2.20533516s: couldn't verify container is exited. %v: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:44.874557   16515 cli_runner.go:164] Run: docker container inspect offline-docker-844000 --format={{.State.Status}}
	W0430 21:09:44.926934   16515 cli_runner.go:211] docker container inspect offline-docker-844000 --format={{.State.Status}} returned with exit code 1
	I0430 21:09:44.926985   16515 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:44.926998   16515 oci.go:664] temporary error: container offline-docker-844000 status is  but expect it to be exited
	I0430 21:09:44.927023   16515 retry.go:31] will retry after 1.850911908s: couldn't verify container is exited. %v: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:46.780394   16515 cli_runner.go:164] Run: docker container inspect offline-docker-844000 --format={{.State.Status}}
	W0430 21:09:46.831480   16515 cli_runner.go:211] docker container inspect offline-docker-844000 --format={{.State.Status}} returned with exit code 1
	I0430 21:09:46.831530   16515 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:46.831540   16515 oci.go:664] temporary error: container offline-docker-844000 status is  but expect it to be exited
	I0430 21:09:46.831560   16515 retry.go:31] will retry after 2.197483697s: couldn't verify container is exited. %v: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:49.030650   16515 cli_runner.go:164] Run: docker container inspect offline-docker-844000 --format={{.State.Status}}
	W0430 21:09:49.081863   16515 cli_runner.go:211] docker container inspect offline-docker-844000 --format={{.State.Status}} returned with exit code 1
	I0430 21:09:49.081912   16515 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:49.081923   16515 oci.go:664] temporary error: container offline-docker-844000 status is  but expect it to be exited
	I0430 21:09:49.081947   16515 retry.go:31] will retry after 4.652314514s: couldn't verify container is exited. %v: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:53.735069   16515 cli_runner.go:164] Run: docker container inspect offline-docker-844000 --format={{.State.Status}}
	W0430 21:09:53.791805   16515 cli_runner.go:211] docker container inspect offline-docker-844000 --format={{.State.Status}} returned with exit code 1
	I0430 21:09:53.791851   16515 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:53.791861   16515 oci.go:664] temporary error: container offline-docker-844000 status is  but expect it to be exited
	I0430 21:09:53.791883   16515 retry.go:31] will retry after 4.419716279s: couldn't verify container is exited. %v: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:58.213275   16515 cli_runner.go:164] Run: docker container inspect offline-docker-844000 --format={{.State.Status}}
	W0430 21:09:58.265640   16515 cli_runner.go:211] docker container inspect offline-docker-844000 --format={{.State.Status}} returned with exit code 1
	I0430 21:09:58.265691   16515 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:09:58.265699   16515 oci.go:664] temporary error: container offline-docker-844000 status is  but expect it to be exited
	I0430 21:09:58.265728   16515 oci.go:88] couldn't shut down offline-docker-844000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	 
	I0430 21:09:58.265811   16515 cli_runner.go:164] Run: docker rm -f -v offline-docker-844000
	I0430 21:09:58.313931   16515 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-844000
	W0430 21:09:58.361353   16515 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-844000 returned with exit code 1
	I0430 21:09:58.361457   16515 cli_runner.go:164] Run: docker network inspect offline-docker-844000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0430 21:09:58.409963   16515 cli_runner.go:164] Run: docker network rm offline-docker-844000
	I0430 21:09:58.509163   16515 fix.go:124] Sleeping 1 second for extra luck!
	I0430 21:09:59.511382   16515 start.go:125] createHost starting for "" (driver="docker")
	I0430 21:09:59.533440   16515 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0430 21:09:59.533599   16515 start.go:159] libmachine.API.Create for "offline-docker-844000" (driver="docker")
	I0430 21:09:59.533619   16515 client.go:168] LocalClient.Create starting
	I0430 21:09:59.533794   16515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/ca.pem
	I0430 21:09:59.533872   16515 main.go:141] libmachine: Decoding PEM data...
	I0430 21:09:59.533892   16515 main.go:141] libmachine: Parsing certificate...
	I0430 21:09:59.533951   16515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/cert.pem
	I0430 21:09:59.534013   16515 main.go:141] libmachine: Decoding PEM data...
	I0430 21:09:59.534025   16515 main.go:141] libmachine: Parsing certificate...
	I0430 21:09:59.554578   16515 cli_runner.go:164] Run: docker network inspect offline-docker-844000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0430 21:09:59.604876   16515 cli_runner.go:211] docker network inspect offline-docker-844000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0430 21:09:59.604974   16515 network_create.go:281] running [docker network inspect offline-docker-844000] to gather additional debugging logs...
	I0430 21:09:59.604993   16515 cli_runner.go:164] Run: docker network inspect offline-docker-844000
	W0430 21:09:59.652510   16515 cli_runner.go:211] docker network inspect offline-docker-844000 returned with exit code 1
	I0430 21:09:59.652539   16515 network_create.go:284] error running [docker network inspect offline-docker-844000]: docker network inspect offline-docker-844000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-844000 not found
	I0430 21:09:59.652558   16515 network_create.go:286] output of [docker network inspect offline-docker-844000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-844000 not found
	
	** /stderr **
	I0430 21:09:59.652700   16515 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0430 21:09:59.702444   16515 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:09:59.703787   16515 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:09:59.705389   16515 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:09:59.706912   16515 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:09:59.708620   16515 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:09:59.709200   16515 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0007719f0}
	I0430 21:09:59.709216   16515 network_create.go:124] attempt to create docker network offline-docker-844000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0430 21:09:59.709317   16515 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-844000 offline-docker-844000
	I0430 21:09:59.794397   16515 network_create.go:108] docker network offline-docker-844000 192.168.94.0/24 created
	I0430 21:09:59.794556   16515 kic.go:121] calculated static IP "192.168.94.2" for the "offline-docker-844000" container
	I0430 21:09:59.794658   16515 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0430 21:09:59.845223   16515 cli_runner.go:164] Run: docker volume create offline-docker-844000 --label name.minikube.sigs.k8s.io=offline-docker-844000 --label created_by.minikube.sigs.k8s.io=true
	I0430 21:09:59.893431   16515 oci.go:103] Successfully created a docker volume offline-docker-844000
	I0430 21:09:59.893557   16515 cli_runner.go:164] Run: docker run --rm --name offline-docker-844000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-844000 --entrypoint /usr/bin/test -v offline-docker-844000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -d /var/lib
	I0430 21:10:00.242174   16515 oci.go:107] Successfully prepared a docker volume offline-docker-844000
	I0430 21:10:00.242211   16515 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 21:10:00.242231   16515 kic.go:194] Starting extracting preloaded images to volume ...
	I0430 21:10:00.242327   16515 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-844000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -I lz4 -xf /preloaded.tar -C /extractDir
	I0430 21:15:59.536297   16515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0430 21:15:59.536427   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:15:59.587678   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	I0430 21:15:59.587792   16515 retry.go:31] will retry after 194.041881ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:15:59.783480   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:15:59.837146   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	I0430 21:15:59.837267   16515 retry.go:31] will retry after 223.865573ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:16:00.063479   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:16:00.132512   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	I0430 21:16:00.132629   16515 retry.go:31] will retry after 744.652021ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:16:00.879652   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:16:00.933121   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	W0430 21:16:00.933242   16515 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	
	W0430 21:16:00.933260   16515 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:16:00.933321   16515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0430 21:16:00.933378   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:16:00.982246   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	I0430 21:16:00.982348   16515 retry.go:31] will retry after 317.533961ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:16:01.302243   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:16:01.351460   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	I0430 21:16:01.351582   16515 retry.go:31] will retry after 368.495781ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:16:01.720755   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:16:01.770773   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	I0430 21:16:01.770874   16515 retry.go:31] will retry after 349.062724ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:16:02.122316   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:16:02.171760   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	I0430 21:16:02.171864   16515 retry.go:31] will retry after 471.463076ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:16:02.645753   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:16:02.696066   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	W0430 21:16:02.696179   16515 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	
	W0430 21:16:02.696202   16515 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:16:02.696225   16515 start.go:128] duration metric: took 6m3.183326372s to createHost
	I0430 21:16:02.696292   16515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0430 21:16:02.696342   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:16:02.743408   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	I0430 21:16:02.743503   16515 retry.go:31] will retry after 324.672886ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:16:03.070562   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:16:03.122932   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	I0430 21:16:03.123034   16515 retry.go:31] will retry after 205.403308ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:16:03.329192   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:16:03.380825   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	I0430 21:16:03.380925   16515 retry.go:31] will retry after 289.028862ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:16:03.671250   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:16:03.745064   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	I0430 21:16:03.745188   16515 retry.go:31] will retry after 947.654914ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:16:04.695251   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:16:04.747415   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	W0430 21:16:04.747527   16515 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	
	W0430 21:16:04.747547   16515 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:16:04.747608   16515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0430 21:16:04.747662   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:16:04.796078   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	I0430 21:16:04.796173   16515 retry.go:31] will retry after 365.000321ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:16:05.162863   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:16:05.213666   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	I0430 21:16:05.213763   16515 retry.go:31] will retry after 520.300221ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:16:05.735807   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:16:05.786510   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	I0430 21:16:05.786606   16515 retry.go:31] will retry after 324.611994ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:16:06.112021   16515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000
	W0430 21:16:06.162600   16515 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000 returned with exit code 1
	W0430 21:16:06.162700   16515 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	
	W0430 21:16:06.162718   16515 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-844000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-844000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000
	I0430 21:16:06.162732   16515 fix.go:56] duration metric: took 6m27.830657121s for fixHost
	I0430 21:16:06.162738   16515 start.go:83] releasing machines lock for "offline-docker-844000", held for 6m27.830707502s
	W0430 21:16:06.162814   16515 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-844000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-844000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0430 21:16:06.206384   16515 out.go:177] 
	W0430 21:16:06.228515   16515 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0430 21:16:06.228595   16515 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0430 21:16:06.228622   16515 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0430 21:16:06.250253   16515 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-844000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
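
The dozens of retry.go:31 lines in the stderr above are one pattern repeated: run docker container inspect, hit "No such container", sleep a short growing interval, try again. Below is a minimal Go sketch of that retry-with-backoff loop, assuming only a docker CLI on PATH; the growth factor and attempt cap are illustrative, not minikube's actual retry package. Every attempt here fails identically because the container was never created, and backoff cannot conjure a missing resource, which is why the run eventually burns its whole time budget.

	// retry_sketch.go: a hedged stand-in for the retry loop seen above,
	// NOT minikube's retry.go. Assumes `docker` is on PATH.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		name := "offline-docker-844000" // container name from the log
		delay := 200 * time.Millisecond
		for attempt := 1; attempt <= 5; attempt++ {
			out, err := exec.Command("docker", "container", "inspect",
				"-f", "{{.State.Status}}", name).CombinedOutput()
			if err == nil {
				fmt.Printf("status: %s", out)
				return
			}
			fmt.Printf("attempt %d: %v; will retry after %v\n", attempt, err, delay)
			time.Sleep(delay)
			delay = delay * 3 / 2 // grow the wait; the real code also adds jitter
		}
		fmt.Println("giving up: container never appeared")
	}
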
panic.go:626: *** TestOffline FAILED at 2024-04-30 21:16:06.329267 -0700 PDT m=+6334.617609001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-844000
helpers_test.go:235: (dbg) docker inspect offline-docker-844000:

-- stdout --
	[
	    {
	        "Name": "offline-docker-844000",
	        "Id": "b97feffe35b55168952740fa14e1af257f08d1a82dfdc4410586882a5b654610",
	        "Created": "2024-05-01T04:09:59.755157785Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-844000"
	        }
	    }
	]

-- /stdout --
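
The inspect output pins down the failure mode: the network offline-docker-844000 exists with the subnet chosen in the stderr above, but "Containers": {} shows nothing ever attached to it. The stderr also shows how that subnet was chosen: minikube steps the third octet by 9 from 192.168.49.0/24 (49, 58, 67, 76, 85, ...) until a candidate is free. An illustrative Go sketch of that enumeration; the real network.go probes host routes and existing docker networks rather than a hard-coded reserved set:

	// subnet_sketch.go: illustrative only; the reserved set is taken from
	// the "skipping subnet ... that is reserved" lines in the log above.
	package main

	import "fmt"

	func main() {
		reserved := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true}
		for octet := 49; octet <= 255; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if reserved[octet] {
				fmt.Println("skipping reserved subnet", subnet)
				continue
			}
			fmt.Println("using free private subnet", subnet) // prints 192.168.94.0/24
			break
		}
	}
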
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-844000 -n offline-docker-844000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-844000 -n offline-docker-844000: exit status 7 (112.34733ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0430 21:16:06.493103   17452 status.go:249] status error: host: state: unknown state "offline-docker-844000": docker container inspect offline-docker-844000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-844000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-844000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-844000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-844000
E0430 21:16:06.934087    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
--- FAIL: TestOffline (758.55s)
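
DRV_CREATE_TIMEOUT is a fixed budget: createHost gets 360 seconds, and the log shows both the original create and the recreate exhausting the full window (the 6m3s and 6m27s duration metrics above). A minimal sketch of bounding a slow operation this way with context.WithTimeout; illustrative, not minikube's start.go:

	// timeout_sketch.go: how a 360s create-host budget behaves when the
	// underlying docker work never finishes. Durations are from the log.
	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	func createHost(ctx context.Context) error {
		select {
		case <-time.After(10 * time.Minute): // stand-in for a create that never finishes in time
			return nil
		case <-ctx.Done():
			return ctx.Err()
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 360*time.Second)
		defer cancel()
		if err := createHost(ctx); errors.Is(err, context.DeadlineExceeded) {
			fmt.Println("create host timed out in 360.000000 seconds")
		}
	}
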

TestCertOptions (7201.375s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-587000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
panic: test timed out after 2h0m0s
running tests:
	TestCertExpiration (1m51s)
	TestCertOptions (1m25s)
	TestNetworkPlugins (27m3s)

goroutine 2638 [running]:
testing.(*M).startAlarm.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/time/sleep.go:177 +0x2d
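
This goroutine is the test binary's watchdog: startAlarm arms a timer for the -timeout value (2h0m0s here) and the timer's callback panics, which is what produced this entire goroutine dump. A simplified sketch of the mechanism; the real harness also prints the running-test list and raises the traceback level so every goroutine's stack is dumped:

	// alarm_sketch.go: simplified model of testing.(*M).startAlarm.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		timeout := 2 * time.Second // the real run used -timeout 2h0m0s
		alarm := time.AfterFunc(timeout, func() {
			panic(fmt.Sprintf("test timed out after %v", timeout))
		})
		defer alarm.Stop()          // a run that finishes in time disarms the alarm
		time.Sleep(3 * time.Second) // outlive the alarm to reproduce the panic
	}
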

goroutine 1 [chan receive, 14 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000763380, 0xc000a97bb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000038348, {0x7740fc0, 0x2a, 0x2a}, {0x3292aa5?, 0x4dc8e19?, 0x7763d80?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000634780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000634780)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195
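
Goroutine 1 is the harness's normal spine: the generated _testmain.go calls the suite's TestMain, which hands control to m.Run to drive every Test* function. The minikube body at main_test.go:62 is not shown in this dump; a conventional TestMain has this shape (the flag handling here is an assumption, not minikube's code):

	package integration

	import (
		"flag"
		"os"
		"testing"
	)

	// TestMain is the hook _testmain.go invokes; m.Run runs the suite and
	// its result becomes the process exit code.
	func TestMain(m *testing.M) {
		flag.Parse()
		os.Exit(m.Run())
	}
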

goroutine 9 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00062eb80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 27 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 26
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 161 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000b00a20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 152
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 568 [syscall]:
syscall.syscall6(0xc00283df80?, 0x1000000000010?, 0x10000000019?, 0x4f101480?, 0x90?, 0x807d108?, 0x90?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc00229f8a0?, 0x31d30a5?, 0x90?, 0x631c140?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x3303c45?, 0xc00229f8d4, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc000aa05a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000ba3080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000ba3080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0002a71e0, 0xc000ba3080)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertOptions(0xc0002a71e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x445
testing.tRunner(0xc0002a71e0, 0x63af478)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390
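
Goroutine 568 shows where TestCertOptions actually sits: parked in wait4 under the integration.Run helper (helpers_test.go:103), which wraps os/exec and blocks until the child minikube process exits. A stand-in with the same shape; the name, signature, and body here are assumptions, not the real helper:

	package integration

	import (
		"os/exec"
		"strings"
		"testing"
	)

	// runCmd mirrors the helper on the stack above: log the invocation, run
	// the command, and surface a non-zero exit through the testing.T.
	func runCmd(t *testing.T, cmd *exec.Cmd) []byte {
		t.Helper()
		t.Logf("(dbg) Run:  %s", strings.Join(cmd.Args, " "))
		out, err := cmd.CombinedOutput() // blocks in Wait/wait4, as the trace shows
		if err != nil {
			t.Logf("(dbg) Non-zero exit: %v", err)
		}
		return out
	}
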

goroutine 2233 [chan receive, 28 minutes]:
testing.(*T).Run(0xc000b341a0, {0x4d6f8e7?, 0x5c5279e2f04?}, 0xc002198138)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc000b341a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc000b341a0, 0x63af558)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 667 [IO wait, 111 minutes]:
internal/poll.runtime_pollWait(0x4eff71a8, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00062e000?, 0x3fe?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00062e000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc00062e000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc002370120)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc002370120)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0008f40f0, {0x63d20f0, 0xc002370120})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0008f40f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0020f7040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 664
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129
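
Goroutine 667 has sat in Accept for 111 minutes: startHTTPProxy launches an http.Server on a background goroutine for the functional tests, and nothing stops it afterwards, so it keeps listening. The same shape in miniature (handler and address are assumptions):

	// proxy_sketch.go: a server parked in Accept like goroutine 667.
	package main

	import (
		"log"
		"net"
		"net/http"
	)

	func main() {
		ln, err := net.Listen("tcp", "127.0.0.1:0") // any free port
		if err != nil {
			log.Fatal(err)
		}
		srv := &http.Server{Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("proxied"))
		})}
		go srv.Serve(ln) // blocks in (*TCPListener).Accept, exactly like the trace
		log.Println("proxy listening on", ln.Addr())
		select {} // keep the process alive; a real test should srv.Close() in cleanup
	}
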

goroutine 162 [chan receive, 115 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a7e5c0, 0xc0000662a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 152
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 569 [syscall, 1 minutes]:
syscall.syscall6(0xc00283df80?, 0x1000000000010?, 0x10000000019?, 0x4f101480?, 0x90?, 0x807d108?, 0x90?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0021b1a40?, 0x31d30a5?, 0x90?, 0x631c140?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x3303c45?, 0xc0021b1a74, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc000aa0390)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000ba2b00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000ba2b00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0002a7380, 0xc000ba2b00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertExpiration(0xc0002a7380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:123 +0x2c5
testing.tRunner(0xc0002a7380, 0x63af470)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1431 [select, 107 minutes]:
net/http.(*persistConn).readLoop(0xc00283b8c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1442
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1799 +0x152f

goroutine 2608 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0x4eff6ae0, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002896360?, 0xc000b27298?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002896360, {0xc000b27298, 0x568, 0x568})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0007463c0, {0xc000b27298?, 0xc000625dc0?, 0x22e?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00283c5a0, {0x63ba178, 0xc0027bc0c8})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x63ba2b8, 0xc00283c5a0}, {0x63ba178, 0xc0027bc0c8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc000094678?, {0x63ba2b8, 0xc00283c5a0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000094738?, {0x63ba2b8?, 0xc00283c5a0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x63ba2b8, 0xc00283c5a0}, {0x63ba238, 0xc0007463c0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0027083c0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 569
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

goroutine 2312 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000b86730)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020f6ea0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020f6ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc0020f6ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc0020f6ea0, 0x63af5a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390
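
Goroutine 2312 (and the many waitParallel goroutines below it) is not stuck on minikube at all: TestStartStop called t.Parallel via MaybeParallel, and the scheduler queues it until a -parallel slot frees up. With the slots held by the hung cert tests, these stay queued for 28 minutes. From the test's side the gate is just the t.Parallel call:

	package integration

	import "testing"

	// A test that calls t.Parallel is suspended inside waitParallel until a
	// slot is free (capped by -parallel, which defaults to GOMAXPROCS).
	func TestExampleParallel(t *testing.T) {
		t.Parallel() // goroutine 2312 is blocked exactly here
		// the body runs only once a slot opens
	}
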

goroutine 165 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc000a7e590, 0x2c)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x5ea93a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000b00900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a7e5c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0000698d0, {0x63bb760, 0xc000814a20}, 0x1, 0xc0000662a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0000698d0, 0x3b9aca00, 0x0, 0x1, 0xc0000662a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 162
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef
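
Goroutine 165 is client-go's certificate-rotation worker idling in workqueue.(*Type).Get, the normal resting state of a workqueue consumer. A minimal producer/consumer sketch using the same client-go package, assuming k8s.io/client-go is available in the module:

	// workqueue_sketch.go: the Get/Done loop the trace above is parked in.
	package main

	import (
		"fmt"

		"k8s.io/client-go/util/workqueue"
	)

	func main() {
		q := workqueue.New()
		go func() {
			q.Add("rotate-cert")
			q.ShutDown()
		}()
		for {
			item, shutdown := q.Get() // blocks until work arrives or ShutDown
			if shutdown {
				return
			}
			fmt.Println("processing:", item)
			q.Done(item) // mark the item finished so it can be re-queued later
		}
	}
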

goroutine 166 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x63df240, 0xc0000662a0}, 0xc00228f750, 0xc0021abf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x63df240, 0xc0000662a0}, 0x0?, 0xc00228f750, 0xc00228f798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x63df240?, 0xc0000662a0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 162
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 167 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 166
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 1381 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc0028a31e0, 0xc0025f3aa0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1380
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973
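
Goroutine 1381 has been blocked on a channel send for 107 minutes inside exec.(*Cmd).watchCtx. That goroutine is created when a command is started with exec.CommandContext, and it hands its result to Wait over an unbuffered channel; if the caller never reaches Wait (here, because the test flow had already stalled), the send can never complete and the goroutine leaks. The non-leaking shape:

	// watchctx_sketch.go: always pair Start with Wait so the internal
	// context-watching goroutine can hand off its result and exit.
	package main

	import (
		"context"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		cmd := exec.CommandContext(ctx, "sleep", "60")
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		// Wait reaps the child and drains watchCtx; skipping it is how
		// "chan send, 107 minutes" goroutines like 1381 accumulate.
		_ = cmd.Wait() // returns once ctx expires and the process is killed
	}
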

goroutine 2340 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000b86730)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b35380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b35380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000b35380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000b35380, 0xc0009a6580)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2300
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2625 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0x4eff7398, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002896540?, 0xc0007f3600?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002896540, {0xc0007f3600, 0x200, 0x200})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000746410, {0xc0007f3600?, 0xc0004ca8c0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00283c5d0, {0x63ba178, 0xc0027bc0d0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x63ba2b8, 0xc00283c5d0}, {0x63ba178, 0xc0027bc0d0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc000093e78?, {0x63ba2b8, 0xc00283c5d0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000093f38?, {0x63ba2b8?, 0xc00283c5d0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x63ba2b8, 0xc00283c5d0}, {0x63ba238, 0xc000746410}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002868780?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 569
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

goroutine 2303 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000b86730)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b34b60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b34b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000b34b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000b34b60, 0xc0009a6280)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2300
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2337 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000b86730)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b34ea0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b34ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000b34ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000b34ea0, 0xc0009a6380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2300
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2635 [IO wait]:
internal/poll.runtime_pollWait(0x4eff72a0, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002896a80?, 0xc002898a8f?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002896a80, {0xc002898a8f, 0x571, 0x571})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc000746460, {0xc002898a8f?, 0xc00238ea80?, 0x225?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00283c870, {0x63ba178, 0xc0027bc070})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x63ba2b8, 0xc00283c870}, {0x63ba178, 0xc0027bc070}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc00228e678?, {0x63ba2b8, 0xc00283c870})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00228e738?, {0x63ba2b8?, 0xc00283c870?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x63ba2b8, 0xc00283c870}, {0x63ba238, 0xc000746460}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000531140?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 568
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

goroutine 2626 [select, 1 minutes]:
os/exec.(*Cmd).watchCtx(0xc000ba2b00, 0xc000530cc0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 569
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 2636 [IO wait]:
internal/poll.runtime_pollWait(0x4eff69e8, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002896b40?, 0xc0002b7c00?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002896b40, {0xc0002b7c00, 0x200, 0x200})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0007464c0, {0xc0002b7c00?, 0x330676d?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00283c8a0, {0x63ba178, 0xc0027bc078})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x63ba2b8, 0xc00283c8a0}, {0x63ba178, 0xc0027bc078}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x7675860?, {0x63ba2b8, 0xc00283c8a0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xf?, {0x63ba2b8?, 0xc00283c8a0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x63ba2b8, 0xc00283c8a0}, {0x63ba238, 0xc0007464c0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002198138?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 568
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

goroutine 918 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x63df240, 0xc0000662a0}, 0xc000b74f50, 0xc000b74f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x63df240, 0xc0000662a0}, 0x60?, 0xc000b74f50, 0xc000b74f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x63df240?, 0xc0000662a0?}, 0xc0002a61a0?, 0x3306900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000937d0?, 0x334cc04?, 0xc00242a360?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 902
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2300 [chan receive, 28 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000b34000, 0xc002198138)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2233
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1441 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc002851ce0, 0xc0028689c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 784
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 2321 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000b86730)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020f7380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020f7380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc0020f7380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:85 +0x89
testing.tRunner(0xc0020f7380, 0x63af580)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390
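Editor's note on the stack above: goroutines parked for 28 minutes in testing.(*testContext).waitParallel are tests that called t.Parallel() (through minikube's MaybeParallel helper) and are queued behind the -test.parallel budget while the failing test still occupies a slot. A minimal stand-alone sketch of that gating, with illustrative names only:

// parallel_gate_test.go: illustrative stand-in, not minikube code.
// Run with `go test -parallel 1 -v`: subtests beyond the parallel
// budget park inside t.Parallel() (testing's waitParallel), exactly
// like the 28-minute goroutines in this dump.
package main

import (
	"testing"
	"time"
)

func TestParallelGate(t *testing.T) {
	for _, name := range []string{"first", "second", "third"} {
		t.Run(name, func(t *testing.T) {
			t.Parallel() // parks here until a parallel slot frees up
			time.Sleep(100 * time.Millisecond)
		})
	}
}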

goroutine 2304 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000b86730)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b34d00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b34d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000b34d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000b34d00, 0xc0009a6300)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2300
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2324 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000b86730)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020f7860)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020f7860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc0020f7860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:292 +0xb4
testing.tRunner(0xc0020f7860, 0x63af538)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2234 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000b86730)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b34340)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b34340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc000b34340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc000b34340, 0x63af560)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2323 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000b86730)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020f76c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020f76c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc0020f76c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:215 +0x39
testing.tRunner(0xc0020f76c0, 0x63af520)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1347 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc0027a9ce0, 0xc0025f3080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1346
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 2338 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000b86730)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b35040)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b35040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000b35040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000b35040, 0xc0009a6400)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2300
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2235 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000b86730)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b34680)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b34680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc000b34680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc000b34680, 0x63af570)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2341 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000b86730)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b35520)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b35520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000b35520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000b35520, 0xc0009a6600)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2300
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2339 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000b86730)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b351e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b351e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000b351e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000b351e0, 0xc0009a6480)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2300
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2637 [select]:
os/exec.(*Cmd).watchCtx(0xc000ba3080, 0xc000531200)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 568
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 2301 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000b86730)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b344e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b344e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000b344e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000b344e0, 0xc0009a6000)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2300
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1918 [syscall, 93 minutes]:
syscall.syscall(0x0?, 0xc0028c25b8?, 0xc000aaeef0?, 0x3272f1d?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sys_darwin.go:23 +0x70
syscall.Flock(0xc0028c2498?, 0xc0024ab880?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_darwin_amd64.go:682 +0x29
github.com/juju/mutex/v2.acquireFlock.func3()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:114 +0x34
github.com/juju/mutex/v2.acquireFlock.func4()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:121 +0x58
github.com/juju/mutex/v2.acquireFlock.func5()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:151 +0x22
created by github.com/juju/mutex/v2.acquireFlock in goroutine 1940
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:150 +0x4b1
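Goroutine 1918 above has been inside syscall.Flock for 93 minutes: github.com/juju/mutex/v2 serializes minikube machine operations with an exclusive flock(2), so the call blocks for as long as another holder keeps the lock file. A stdlib-only sketch of that conflict; the lock path and file names below are invented for illustration:

// flock_demo.go: illustrative only. flock(2) locks attach to the open
// file description, so a second open of the same file cannot take
// LOCK_EX while the first holds it: with LOCK_NB it fails fast, and
// without LOCK_NB it blocks, which is the state goroutine 1918 is in.
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	a, err := os.OpenFile("/tmp/juju-mutex-demo.lock", os.O_CREATE|os.O_RDWR, 0o644)
	if err != nil {
		panic(err)
	}
	defer a.Close()
	b, err := os.OpenFile("/tmp/juju-mutex-demo.lock", os.O_CREATE|os.O_RDWR, 0o644)
	if err != nil {
		panic(err)
	}
	defer b.Close()

	// First acquisition succeeds immediately.
	if err := syscall.Flock(int(a.Fd()), syscall.LOCK_EX); err != nil {
		panic(err)
	}
	// Second open file description conflicts; non-blocking probe fails.
	err = syscall.Flock(int(b.Fd()), syscall.LOCK_EX|syscall.LOCK_NB)
	fmt.Println("second acquisition:", err) // "resource temporarily unavailable"
}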

goroutine 2322 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000b86730)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020f7520)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020f7520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc0020f7520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:143 +0x86
testing.tRunner(0xc0020f7520, 0x63af5a8)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 901 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0027e5da0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 797
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 1432 [select, 107 minutes]:
net/http.(*persistConn).writeLoop(0xc00283b8c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1442
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1800 +0x1585

goroutine 902 [chan receive, 109 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a7f840, 0xc0000662a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 797
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 917 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000a7f810, 0x2b)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x5ea93a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0027e5c80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a7f840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000486830, {0x63bb760, 0xc000b573b0}, 0x1, 0xc0000662a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000486830, 0x3b9aca00, 0x0, 0x1, 0xc0000662a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 902
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef
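Goroutines 902, 917, and 918 above are client-go's certificate-rotation machinery: wait.Until re-invokes runWorker once per second (the 0x3b9aca00 argument to JitterUntil is 1e9 ns), and each pass blocks in workqueue.(*Type).Get on a sync.Cond until a key is queued, which is why goroutine 917 shows sync.Cond.Wait. A simplified stdlib-only stand-in for that loop shape; queue and until below are sketches, not the real workqueue/wait APIs:

// worker_loop.go: stand-in for the client-go worker pattern.
package main

import (
	"fmt"
	"sync"
	"time"
)

// queue mimics the blocking Get of a work queue.
type queue struct {
	mu    sync.Mutex
	cond  *sync.Cond
	items []string
}

func newQueue() *queue {
	q := &queue{}
	q.cond = sync.NewCond(&q.mu)
	return q
}

func (q *queue) Add(item string) {
	q.mu.Lock()
	q.items = append(q.items, item)
	q.mu.Unlock()
	q.cond.Signal()
}

func (q *queue) Get() string {
	q.mu.Lock()
	defer q.mu.Unlock()
	for len(q.items) == 0 {
		q.cond.Wait() // parked here while there is no work
	}
	item := q.items[0]
	q.items = q.items[1:]
	return item
}

// until re-runs f every period until stop closes, like wait.Until.
func until(f func(), period time.Duration, stop <-chan struct{}) {
	for {
		f()
		select {
		case <-stop:
			return
		case <-time.After(period):
		}
	}
}

func main() {
	q := newQueue()
	stop := make(chan struct{})
	go until(func() { fmt.Println("processed:", q.Get()) }, time.Second, stop)
	q.Add("client-cert-key")
	time.Sleep(100 * time.Millisecond)
	close(stop)
}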

goroutine 919 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 918
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2302 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000b86730)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000b349c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000b349c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000b349c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000b349c0, 0xc0009a6180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2300
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 1045 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc0026da580, 0xc0024ef500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1044
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973
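A recurring pattern in this dump: os/exec.(*Cmd).watchCtx stuck in chan send for 107 minutes (goroutines 1045, 1347, 1441), while the ones merely in select (2626, 2637) are watchers of still-running commands. The chan send state arises when a command started via exec.CommandContext is abandoned without a matching Wait, so the watcher can never deliver its result. A minimal reproduction sketch; the command name and counts are arbitrary:

// watchctx_leak.go: each Start without a matching Wait strands the
// internal watchCtx goroutine on its result-channel send, the
// "chan send" state in the traces above.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"runtime"
	"time"
)

func main() {
	before := runtime.NumGoroutine()
	for i := 0; i < 10; i++ {
		ctx, cancel := context.WithCancel(context.Background())
		cmd := exec.CommandContext(ctx, "sleep", "10")
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		cancel() // kills the process, but watchCtx still has a result to deliver
		// cmd.Wait() is deliberately never called, so the send never completes.
		_ = cmd
	}
	time.Sleep(500 * time.Millisecond) // give the watchers time to block
	fmt.Printf("goroutines: before=%d after=%d\n", before, runtime.NumGoroutine())
}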

TestDockerFlags (756.16s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-781000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0430 21:16:41.571230    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 21:20:49.993430    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 21:21:06.935177    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 21:21:41.573128    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 21:26:06.937188    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 21:26:24.622147    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 21:26:41.573073    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-781000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 52 (12m34.869303788s)

-- stdout --
	* [docker-flags-781000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18779
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "docker-flags-781000" primary control-plane node in "docker-flags-781000" cluster
	* Pulling base image v0.0.43-1714386659-18769 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-781000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0430 21:16:30.949886   17640 out.go:291] Setting OutFile to fd 1 ...
	I0430 21:16:30.950156   17640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 21:16:30.950161   17640 out.go:304] Setting ErrFile to fd 2...
	I0430 21:16:30.950165   17640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 21:16:30.950335   17640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 21:16:30.951821   17640 out.go:298] Setting JSON to false
	I0430 21:16:30.973790   17640 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":8161,"bootTime":1714528829,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0430 21:16:30.973879   17640 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0430 21:16:30.995460   17640 out.go:177] * [docker-flags-781000] minikube v1.33.0 on Darwin 14.4.1
	I0430 21:16:31.063940   17640 out.go:177]   - MINIKUBE_LOCATION=18779
	I0430 21:16:31.041209   17640 notify.go:220] Checking for updates...
	I0430 21:16:31.124543   17640 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	I0430 21:16:31.182970   17640 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0430 21:16:31.226875   17640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0430 21:16:31.250018   17640 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	I0430 21:16:31.271045   17640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0430 21:16:31.292684   17640 config.go:182] Loaded profile config "force-systemd-flag-742000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 21:16:31.292853   17640 driver.go:392] Setting default libvirt URI to qemu:///system
	I0430 21:16:31.350385   17640 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0430 21:16:31.350588   17640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0430 21:16:31.457266   17640 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:117 OomKillDisable:false NGoroutines:235 SystemTime:2024-05-01 04:16:31.446809373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211080192 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0430 21:16:31.479214   17640 out.go:177] * Using the docker driver based on user configuration
	I0430 21:16:31.501051   17640 start.go:297] selected driver: docker
	I0430 21:16:31.501087   17640 start.go:901] validating driver "docker" against <nil>
	I0430 21:16:31.501120   17640 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0430 21:16:31.505467   17640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0430 21:16:31.613720   17640 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:117 OomKillDisable:false NGoroutines:235 SystemTime:2024-05-01 04:16:31.603174941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211080192 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0430 21:16:31.613910   17640 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0430 21:16:31.614100   17640 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0430 21:16:31.635532   17640 out.go:177] * Using Docker Desktop driver with root privileges
	I0430 21:16:31.656815   17640 cni.go:84] Creating CNI manager for ""
	I0430 21:16:31.656859   17640 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0430 21:16:31.656876   17640 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0430 21:16:31.656992   17640 start.go:340] cluster config:
	{Name:docker-flags-781000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-781000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0430 21:16:31.678531   17640 out.go:177] * Starting "docker-flags-781000" primary control-plane node in "docker-flags-781000" cluster
	I0430 21:16:31.720637   17640 cache.go:121] Beginning downloading kic base image for docker with docker
	I0430 21:16:31.743694   17640 out.go:177] * Pulling base image v0.0.43-1714386659-18769 ...
	I0430 21:16:31.785649   17640 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 21:16:31.785682   17640 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e in local docker daemon
	I0430 21:16:31.785702   17640 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0430 21:16:31.785726   17640 cache.go:56] Caching tarball of preloaded images
	I0430 21:16:31.785948   17640 preload.go:173] Found /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0430 21:16:31.785966   17640 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0430 21:16:31.786669   17640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/docker-flags-781000/config.json ...
	I0430 21:16:31.786801   17640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/docker-flags-781000/config.json: {Name:mka41eeff33d7d45f08ca6eb41d91202b9ce2bbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0430 21:16:31.836481   17640 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e in local docker daemon, skipping pull
	I0430 21:16:31.836526   17640 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e exists in daemon, skipping load
	I0430 21:16:31.836545   17640 cache.go:194] Successfully downloaded all kic artifacts
	I0430 21:16:31.836586   17640 start.go:360] acquireMachinesLock for docker-flags-781000: {Name:mkf69979434565ca6da526d5bb4d5fe1c90b19e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0430 21:16:31.836755   17640 start.go:364] duration metric: took 156.512µs to acquireMachinesLock for "docker-flags-781000"
	I0430 21:16:31.836782   17640 start.go:93] Provisioning new machine with config: &{Name:docker-flags-781000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:docker-flags-781000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0430 21:16:31.836864   17640 start.go:125] createHost starting for "" (driver="docker")
	I0430 21:16:31.858799   17640 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0430 21:16:31.859200   17640 start.go:159] libmachine.API.Create for "docker-flags-781000" (driver="docker")
	I0430 21:16:31.859255   17640 client.go:168] LocalClient.Create starting
	I0430 21:16:31.859479   17640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/ca.pem
	I0430 21:16:31.859581   17640 main.go:141] libmachine: Decoding PEM data...
	I0430 21:16:31.859613   17640 main.go:141] libmachine: Parsing certificate...
	I0430 21:16:31.859730   17640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/cert.pem
	I0430 21:16:31.859803   17640 main.go:141] libmachine: Decoding PEM data...
	I0430 21:16:31.859820   17640 main.go:141] libmachine: Parsing certificate...
	I0430 21:16:31.860695   17640 cli_runner.go:164] Run: docker network inspect docker-flags-781000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0430 21:16:31.909859   17640 cli_runner.go:211] docker network inspect docker-flags-781000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0430 21:16:31.909973   17640 network_create.go:281] running [docker network inspect docker-flags-781000] to gather additional debugging logs...
	I0430 21:16:31.909988   17640 cli_runner.go:164] Run: docker network inspect docker-flags-781000
	W0430 21:16:31.958359   17640 cli_runner.go:211] docker network inspect docker-flags-781000 returned with exit code 1
	I0430 21:16:31.958389   17640 network_create.go:284] error running [docker network inspect docker-flags-781000]: docker network inspect docker-flags-781000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-781000 not found
	I0430 21:16:31.958401   17640 network_create.go:286] output of [docker network inspect docker-flags-781000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-781000 not found
	
	** /stderr **
	I0430 21:16:31.958534   17640 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0430 21:16:32.009874   17640 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:16:32.011466   17640 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:16:32.013103   17640 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:16:32.014787   17640 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:16:32.015302   17640 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00228add0}
	I0430 21:16:32.015319   17640 network_create.go:124] attempt to create docker network docker-flags-781000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0430 21:16:32.015400   17640 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-781000 docker-flags-781000
	I0430 21:16:32.099637   17640 network_create.go:108] docker network docker-flags-781000 192.168.85.0/24 created
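The four skipping subnet lines above show the free-subnet probe: starting from 192.168.49.0/24, the third octet advances by 9 and the first /24 not reserved by an existing network wins, which is how this run lands on 192.168.85.0/24. An illustrative sketch of that scan (not minikube's actual network.go code):

// subnet_scan.go: illustrative only. With the four reservations logged
// above, the scan lands on 192.168.85.0/24, matching network.go:206.
package main

import "fmt"

func main() {
	reserved := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	for third := 49; third < 256; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if reserved[subnet] {
			fmt.Println("skipping reserved subnet", subnet)
			continue
		}
		fmt.Println("using free private subnet", subnet)
		break
	}
}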
	I0430 21:16:32.099676   17640 kic.go:121] calculated static IP "192.168.85.2" for the "docker-flags-781000" container
	I0430 21:16:32.099782   17640 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0430 21:16:32.150224   17640 cli_runner.go:164] Run: docker volume create docker-flags-781000 --label name.minikube.sigs.k8s.io=docker-flags-781000 --label created_by.minikube.sigs.k8s.io=true
	I0430 21:16:32.199158   17640 oci.go:103] Successfully created a docker volume docker-flags-781000
	I0430 21:16:32.199270   17640 cli_runner.go:164] Run: docker run --rm --name docker-flags-781000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-781000 --entrypoint /usr/bin/test -v docker-flags-781000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -d /var/lib
	I0430 21:16:32.500533   17640 oci.go:107] Successfully prepared a docker volume docker-flags-781000
	I0430 21:16:32.500586   17640 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 21:16:32.500605   17640 kic.go:194] Starting extracting preloaded images to volume ...
	I0430 21:16:32.500746   17640 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-781000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -I lz4 -xf /preloaded.tar -C /extractDir
	I0430 21:22:31.862583   17640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0430 21:22:31.862733   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:22:31.913839   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	I0430 21:22:31.913965   17640 retry.go:31] will retry after 330.059714ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:32.244892   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:22:32.302317   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	I0430 21:22:32.302408   17640 retry.go:31] will retry after 392.301744ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:32.696269   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:22:32.744675   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	I0430 21:22:32.744765   17640 retry.go:31] will retry after 629.063916ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:33.374942   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:22:33.427762   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	W0430 21:22:33.427866   17640 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	
	W0430 21:22:33.427891   17640 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
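The failing inspect calls above pass a Go template through docker container inspect -f to read the host port mapped to the container's 22/tcp; because the container was never created, docker exits 1 and minikube keeps retrying with growing backoff. Evaluated against a fabricated stand-in of .NetworkSettings.Ports, the same template behaves like this:

// port_template.go: the template from the inspect calls above, run
// against made-up data (the port map below is purely illustrative).
package main

import (
	"os"
	"text/template"
)

func main() {
	data := map[string]any{
		"NetworkSettings": map[string]any{
			"Ports": map[string][]map[string]string{
				"22/tcp": {{"HostIp": "127.0.0.1", "HostPort": "58222"}},
			},
		},
	}
	tmpl := template.Must(template.New("sshPort").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	if err := tmpl.Execute(os.Stdout, data); err != nil { // prints 58222
		panic(err)
	}
}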
	I0430 21:22:33.427940   17640 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0430 21:22:33.427999   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:22:33.477163   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	I0430 21:22:33.477272   17640 retry.go:31] will retry after 309.145128ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:33.788694   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:22:33.839145   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	I0430 21:22:33.839239   17640 retry.go:31] will retry after 331.21597ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:34.170616   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:22:34.220475   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	I0430 21:22:34.220576   17640 retry.go:31] will retry after 632.336014ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:34.853942   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:22:34.902695   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	W0430 21:22:34.902799   17640 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	
	W0430 21:22:34.902816   17640 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:34.902833   17640 start.go:128] duration metric: took 6m3.064505368s to createHost
	I0430 21:22:34.902840   17640 start.go:83] releasing machines lock for "docker-flags-781000", held for 6m3.064625193s
	W0430 21:22:34.902856   17640 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0430 21:22:34.903304   17640 cli_runner.go:164] Run: docker container inspect docker-flags-781000 --format={{.State.Status}}
	W0430 21:22:34.951378   17640 cli_runner.go:211] docker container inspect docker-flags-781000 --format={{.State.Status}} returned with exit code 1
	I0430 21:22:34.951434   17640 delete.go:82] Unable to get host status for docker-flags-781000, assuming it has already been deleted: state: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	W0430 21:22:34.951502   17640 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0430 21:22:34.951514   17640 start.go:728] Will try again in 5 seconds ...
	I0430 21:22:39.953005   17640 start.go:360] acquireMachinesLock for docker-flags-781000: {Name:mkf69979434565ca6da526d5bb4d5fe1c90b19e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0430 21:22:39.953815   17640 start.go:364] duration metric: took 701.52µs to acquireMachinesLock for "docker-flags-781000"
	I0430 21:22:39.954011   17640 start.go:96] Skipping create...Using existing machine configuration
	I0430 21:22:39.954031   17640 fix.go:54] fixHost starting: 
	I0430 21:22:39.954586   17640 cli_runner.go:164] Run: docker container inspect docker-flags-781000 --format={{.State.Status}}
	W0430 21:22:40.005420   17640 cli_runner.go:211] docker container inspect docker-flags-781000 --format={{.State.Status}} returned with exit code 1
	I0430 21:22:40.005465   17640 fix.go:112] recreateIfNeeded on docker-flags-781000: state= err=unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:40.005483   17640 fix.go:117] machineExists: false. err=machine does not exist
	I0430 21:22:40.026203   17640 out.go:177] * docker "docker-flags-781000" container is missing, will recreate.
	I0430 21:22:40.101907   17640 delete.go:124] DEMOLISHING docker-flags-781000 ...
	I0430 21:22:40.102091   17640 cli_runner.go:164] Run: docker container inspect docker-flags-781000 --format={{.State.Status}}
	W0430 21:22:40.190979   17640 cli_runner.go:211] docker container inspect docker-flags-781000 --format={{.State.Status}} returned with exit code 1
	W0430 21:22:40.191041   17640 stop.go:83] unable to get state: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:40.191058   17640 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:40.191421   17640 cli_runner.go:164] Run: docker container inspect docker-flags-781000 --format={{.State.Status}}
	W0430 21:22:40.240294   17640 cli_runner.go:211] docker container inspect docker-flags-781000 --format={{.State.Status}} returned with exit code 1
	I0430 21:22:40.240341   17640 delete.go:82] Unable to get host status for docker-flags-781000, assuming it has already been deleted: state: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:40.240410   17640 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-781000
	W0430 21:22:40.287679   17640 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-781000 returned with exit code 1
	I0430 21:22:40.287715   17640 kic.go:371] could not find the container docker-flags-781000 to remove it. will try anyways
	I0430 21:22:40.287784   17640 cli_runner.go:164] Run: docker container inspect docker-flags-781000 --format={{.State.Status}}
	W0430 21:22:40.335232   17640 cli_runner.go:211] docker container inspect docker-flags-781000 --format={{.State.Status}} returned with exit code 1
	W0430 21:22:40.335276   17640 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:40.335351   17640 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-781000 /bin/bash -c "sudo init 0"
	W0430 21:22:40.382772   17640 cli_runner.go:211] docker exec --privileged -t docker-flags-781000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0430 21:22:40.382811   17640 oci.go:650] error shutdown docker-flags-781000: docker exec --privileged -t docker-flags-781000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:41.383462   17640 cli_runner.go:164] Run: docker container inspect docker-flags-781000 --format={{.State.Status}}
	W0430 21:22:41.434360   17640 cli_runner.go:211] docker container inspect docker-flags-781000 --format={{.State.Status}} returned with exit code 1
	I0430 21:22:41.434407   17640 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:41.434418   17640 oci.go:664] temporary error: container docker-flags-781000 status is  but expect it to be exited
	I0430 21:22:41.434441   17640 retry.go:31] will retry after 569.420968ms: couldn't verify container is exited. %v: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:42.005757   17640 cli_runner.go:164] Run: docker container inspect docker-flags-781000 --format={{.State.Status}}
	W0430 21:22:42.056465   17640 cli_runner.go:211] docker container inspect docker-flags-781000 --format={{.State.Status}} returned with exit code 1
	I0430 21:22:42.056514   17640 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:42.056525   17640 oci.go:664] temporary error: container docker-flags-781000 status is  but expect it to be exited
	I0430 21:22:42.056548   17640 retry.go:31] will retry after 1.002976231s: couldn't verify container is exited. %v: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:43.061803   17640 cli_runner.go:164] Run: docker container inspect docker-flags-781000 --format={{.State.Status}}
	W0430 21:22:43.112367   17640 cli_runner.go:211] docker container inspect docker-flags-781000 --format={{.State.Status}} returned with exit code 1
	I0430 21:22:43.112410   17640 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:43.112420   17640 oci.go:664] temporary error: container docker-flags-781000 status is  but expect it to be exited
	I0430 21:22:43.112443   17640 retry.go:31] will retry after 651.719601ms: couldn't verify container is exited. %v: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:43.766270   17640 cli_runner.go:164] Run: docker container inspect docker-flags-781000 --format={{.State.Status}}
	W0430 21:22:43.816679   17640 cli_runner.go:211] docker container inspect docker-flags-781000 --format={{.State.Status}} returned with exit code 1
	I0430 21:22:43.816729   17640 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:43.816750   17640 oci.go:664] temporary error: container docker-flags-781000 status is  but expect it to be exited
	I0430 21:22:43.816778   17640 retry.go:31] will retry after 1.99298697s: couldn't verify container is exited. %v: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:45.811446   17640 cli_runner.go:164] Run: docker container inspect docker-flags-781000 --format={{.State.Status}}
	W0430 21:22:45.860742   17640 cli_runner.go:211] docker container inspect docker-flags-781000 --format={{.State.Status}} returned with exit code 1
	I0430 21:22:45.860786   17640 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:45.860795   17640 oci.go:664] temporary error: container docker-flags-781000 status is  but expect it to be exited
	I0430 21:22:45.860821   17640 retry.go:31] will retry after 1.455137045s: couldn't verify container is exited. %v: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:47.318114   17640 cli_runner.go:164] Run: docker container inspect docker-flags-781000 --format={{.State.Status}}
	W0430 21:22:47.370545   17640 cli_runner.go:211] docker container inspect docker-flags-781000 --format={{.State.Status}} returned with exit code 1
	I0430 21:22:47.370591   17640 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:47.370602   17640 oci.go:664] temporary error: container docker-flags-781000 status is  but expect it to be exited
	I0430 21:22:47.370625   17640 retry.go:31] will retry after 4.807154173s: couldn't verify container is exited. %v: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:52.178815   17640 cli_runner.go:164] Run: docker container inspect docker-flags-781000 --format={{.State.Status}}
	W0430 21:22:52.228668   17640 cli_runner.go:211] docker container inspect docker-flags-781000 --format={{.State.Status}} returned with exit code 1
	I0430 21:22:52.228709   17640 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:52.228717   17640 oci.go:664] temporary error: container docker-flags-781000 status is  but expect it to be exited
	I0430 21:22:52.228745   17640 retry.go:31] will retry after 5.260703979s: couldn't verify container is exited. %v: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:57.490731   17640 cli_runner.go:164] Run: docker container inspect docker-flags-781000 --format={{.State.Status}}
	W0430 21:22:57.541913   17640 cli_runner.go:211] docker container inspect docker-flags-781000 --format={{.State.Status}} returned with exit code 1
	I0430 21:22:57.541962   17640 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:22:57.541973   17640 oci.go:664] temporary error: container docker-flags-781000 status is  but expect it to be exited
	I0430 21:22:57.542017   17640 oci.go:88] couldn't shut down docker-flags-781000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	 
	I0430 21:22:57.542096   17640 cli_runner.go:164] Run: docker rm -f -v docker-flags-781000
	I0430 21:22:57.591189   17640 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-781000
	W0430 21:22:57.638562   17640 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-781000 returned with exit code 1
	I0430 21:22:57.638674   17640 cli_runner.go:164] Run: docker network inspect docker-flags-781000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0430 21:22:57.687251   17640 cli_runner.go:164] Run: docker network rm docker-flags-781000
	I0430 21:22:57.785526   17640 fix.go:124] Sleeping 1 second for extra luck!
	I0430 21:22:58.787691   17640 start.go:125] createHost starting for "" (driver="docker")
	I0430 21:22:58.810905   17640 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0430 21:22:58.811071   17640 start.go:159] libmachine.API.Create for "docker-flags-781000" (driver="docker")
	I0430 21:22:58.811095   17640 client.go:168] LocalClient.Create starting
	I0430 21:22:58.811339   17640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/ca.pem
	I0430 21:22:58.811435   17640 main.go:141] libmachine: Decoding PEM data...
	I0430 21:22:58.811462   17640 main.go:141] libmachine: Parsing certificate...
	I0430 21:22:58.811538   17640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/cert.pem
	I0430 21:22:58.811611   17640 main.go:141] libmachine: Decoding PEM data...
	I0430 21:22:58.811626   17640 main.go:141] libmachine: Parsing certificate...
	I0430 21:22:58.812320   17640 cli_runner.go:164] Run: docker network inspect docker-flags-781000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0430 21:22:58.865694   17640 cli_runner.go:211] docker network inspect docker-flags-781000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0430 21:22:58.865809   17640 network_create.go:281] running [docker network inspect docker-flags-781000] to gather additional debugging logs...
	I0430 21:22:58.865832   17640 cli_runner.go:164] Run: docker network inspect docker-flags-781000
	W0430 21:22:58.913360   17640 cli_runner.go:211] docker network inspect docker-flags-781000 returned with exit code 1
	I0430 21:22:58.913390   17640 network_create.go:284] error running [docker network inspect docker-flags-781000]: docker network inspect docker-flags-781000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-781000 not found
	I0430 21:22:58.913401   17640 network_create.go:286] output of [docker network inspect docker-flags-781000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-781000 not found
	
	** /stderr **
	I0430 21:22:58.913522   17640 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0430 21:22:58.963296   17640 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:22:58.964669   17640 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:22:58.965966   17640 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:22:58.967334   17640 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:22:58.968751   17640 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:22:58.970278   17640 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:22:58.970617   17640 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021305c0}
	I0430 21:22:58.970629   17640 network_create.go:124] attempt to create docker network docker-flags-781000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0430 21:22:58.970701   17640 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-781000 docker-flags-781000
	I0430 21:22:59.054393   17640 network_create.go:108] docker network docker-flags-781000 192.168.103.0/24 created
	I0430 21:22:59.054430   17640 kic.go:121] calculated static IP "192.168.103.2" for the "docker-flags-781000" container
	I0430 21:22:59.054540   17640 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0430 21:22:59.105036   17640 cli_runner.go:164] Run: docker volume create docker-flags-781000 --label name.minikube.sigs.k8s.io=docker-flags-781000 --label created_by.minikube.sigs.k8s.io=true
	I0430 21:22:59.152582   17640 oci.go:103] Successfully created a docker volume docker-flags-781000
	I0430 21:22:59.152691   17640 cli_runner.go:164] Run: docker run --rm --name docker-flags-781000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-781000 --entrypoint /usr/bin/test -v docker-flags-781000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -d /var/lib
	I0430 21:22:59.402617   17640 oci.go:107] Successfully prepared a docker volume docker-flags-781000
	I0430 21:22:59.402648   17640 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 21:22:59.402662   17640 kic.go:194] Starting extracting preloaded images to volume ...
	I0430 21:22:59.402759   17640 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-781000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -I lz4 -xf /preloaded.tar -C /extractDir
	I0430 21:28:58.813093   17640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0430 21:28:58.813216   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:28:58.865654   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	I0430 21:28:58.865763   17640 retry.go:31] will retry after 319.300733ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:28:59.187491   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:28:59.239338   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	I0430 21:28:59.239446   17640 retry.go:31] will retry after 295.854508ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:28:59.537659   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:28:59.588222   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	I0430 21:28:59.588327   17640 retry.go:31] will retry after 486.926099ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:29:00.077661   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:29:00.128995   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	W0430 21:29:00.129101   17640 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	
	W0430 21:29:00.129128   17640 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:29:00.129176   17640 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0430 21:29:00.129236   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:29:00.217997   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	I0430 21:29:00.218085   17640 retry.go:31] will retry after 274.805218ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:29:00.493928   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:29:00.545015   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	I0430 21:29:00.545120   17640 retry.go:31] will retry after 527.289122ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:29:01.074687   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:29:01.126253   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	I0430 21:29:01.126358   17640 retry.go:31] will retry after 600.321986ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:29:01.728990   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:29:01.779863   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	W0430 21:29:01.779969   17640 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	
	W0430 21:29:01.779994   17640 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:29:01.780006   17640 start.go:128] duration metric: took 6m2.99081692s to createHost
	I0430 21:29:01.780070   17640 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0430 21:29:01.780123   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:29:01.828578   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	I0430 21:29:01.828680   17640 retry.go:31] will retry after 127.558422ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:29:01.956930   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:29:02.007328   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	I0430 21:29:02.007423   17640 retry.go:31] will retry after 448.879194ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:29:02.458654   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:29:02.509046   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	I0430 21:29:02.509150   17640 retry.go:31] will retry after 306.187879ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:29:02.817526   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:29:02.868922   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	I0430 21:29:02.869023   17640 retry.go:31] will retry after 661.704727ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:29:03.532412   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:29:03.581791   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	W0430 21:29:03.581894   17640 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	
	W0430 21:29:03.581909   17640 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:29:03.581965   17640 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0430 21:29:03.582026   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:29:03.629582   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	I0430 21:29:03.629668   17640 retry.go:31] will retry after 175.216903ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:29:03.807257   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:29:03.859980   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	I0430 21:29:03.860070   17640 retry.go:31] will retry after 279.411233ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:29:04.140451   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:29:04.190162   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	I0430 21:29:04.190256   17640 retry.go:31] will retry after 554.176273ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:29:04.745359   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:29:04.796794   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	I0430 21:29:04.796887   17640 retry.go:31] will retry after 761.905089ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:29:05.560091   17640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000
	W0430 21:29:05.611305   17640 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000 returned with exit code 1
	W0430 21:29:05.611399   17640 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	
	W0430 21:29:05.611419   17640 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-781000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-781000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	I0430 21:29:05.611432   17640 fix.go:56] duration metric: took 6m25.65586152s for fixHost
	I0430 21:29:05.611438   17640 start.go:83] releasing machines lock for "docker-flags-781000", held for 6m25.656040002s
	W0430 21:29:05.611512   17640 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-781000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p docker-flags-781000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0430 21:29:05.656166   17640 out.go:177] 
	W0430 21:29:05.678250   17640 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0430 21:29:05.678313   17640 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0430 21:29:05.678334   17640 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0430 21:29:05.699814   17640 out.go:177] 

** /stderr **
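
The stderr trace above is dominated by one pattern: every failed "docker container inspect" is retried by retry.go:31 after a short, randomized, growing delay, until the 360-second createHost budget runs out (the 6m3s duration metric is that ceiling plus the final round of probes). As a rough illustration of that shape of loop, here is a minimal, self-contained Go sketch; the helper name, the starting delay, and the growth factor are assumptions for illustration, not minikube's actual retry code.

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retryUntil re-runs op with a jittered, growing delay until it succeeds
	// or the deadline passes. The shape mirrors the retry.go lines above;
	// the constants are illustrative, not minikube's.
	func retryUntil(deadline time.Time, op func() error) error {
		base := 300 * time.Millisecond
		for {
			err := op()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("create host timed out: %w", err)
			}
			delay := base + time.Duration(rand.Int63n(int64(base))) // jitter
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			base = base * 3 / 2 // grow the delay, as the logged delays do
		}
	}

	func main() {
		deadline := time.Now().Add(360 * time.Second) // the createHost budget
		err := retryUntil(deadline, func() error {
			out, err := exec.Command("docker", "container", "inspect",
				"docker-flags-781000", "--format", "{{.State.Status}}").CombinedOutput()
			if err != nil {
				return fmt.Errorf("unknown state: %s", out)
			}
			fmt.Printf("state: %s", out)
			return nil
		})
		if err != nil {
			fmt.Println(err)
		}
	}

Against a container that never exists, as here, every attempt returns exit status 1, so the loop can only end when the deadline expires.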
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-781000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 52
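
The operation that kept failing is the SSH host-port lookup: minikube asks Docker which host port was published for the container's port 22, using the Go template quoted throughout the trace. A sketch of that exact query (same template string, simplified error handling):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The same Go template the cli_runner lines use to resolve the host
		// port Docker published for the container's port 22.
		const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect",
			"-f", tmpl, "docker-flags-781000").CombinedOutput()
		if err != nil {
			// With no container, Docker answers exactly as in the log:
			// "Error response from daemon: No such container: docker-flags-781000"
			fmt.Printf("get port 22 failed: %v\n%s", err, out)
			return
		}
		fmt.Println("ssh port:", strings.TrimSpace(string(out)))
	}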
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-781000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-781000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (200.852797ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-781000 host status: state: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	

** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-781000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
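
For context on these two assertions: the test passes --docker-env=FOO=BAR and --docker-env=BAZ=BAT at start, then expects "systemctl show docker --property=Environment" inside the node to print a line of the form Environment=FOO=BAR BAZ=BAT, and checks that each pair appears in the command output. Because the ssh step itself failed, the output was the empty "\n\n" quoted above. A small sketch of the substring check, with a hypothetical healthy output baked in:

	package main

	import (
		"fmt"
		"strings"
	)

	// checkDockerEnv mirrors the docker_test.go:63 assertion: each
	// --docker-env pair must appear in the systemctl output.
	func checkDockerEnv(systemctlOut string, pairs ...string) error {
		for _, kv := range pairs {
			if !strings.Contains(systemctlOut, kv) {
				return fmt.Errorf("expected env key/value %q to be included in %q", kv, systemctlOut)
			}
		}
		return nil
	}

	func main() {
		// What a healthy node would plausibly print; the failed run saw "\n\n".
		got := "Environment=FOO=BAR BAZ=BAT\n"
		if err := checkDockerEnv(got, "FOO=BAR", "BAZ=BAT"); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("docker env flags propagated")
	}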
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-781000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-781000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (197.930114ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-781000 host status: state: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000
	

** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-781000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-781000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
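
The ExecStart assertion is the flip side of the Environment one: --docker-opt=debug is expected to surface as a --debug flag on dockerd's command line, which "systemctl show docker --property=ExecStart" would reveal. Since the node never came up, the output was again empty. A minimal sketch of running that probe end to end, assuming a working profile (the command line is the one quoted above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The probe the test runs, against the profile from this log.
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", "docker-flags-781000",
			"ssh", "sudo systemctl show docker --property=ExecStart --no-pager").CombinedOutput()
		if err != nil {
			fmt.Printf("ssh probe failed: %v\n%s", err, out)
			return
		}
		if !strings.Contains(string(out), "--debug") {
			fmt.Printf("expected ExecStart to include --debug, got: %q\n", out)
			return
		}
		fmt.Println("--docker-opt=debug propagated to dockerd")
	}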
panic.go:626: *** TestDockerFlags FAILED at 2024-04-30 21:29:06.172698 -0700 PDT m=+7114.457919065
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-781000
helpers_test.go:235: (dbg) docker inspect docker-flags-781000:

-- stdout --
	[
	    {
	        "Name": "docker-flags-781000",
	        "Id": "fccba08a6d65cdcf06b7b82130ed749f577aad0451f6411133eba926b46306a3",
	        "Created": "2024-05-01T04:22:59.015054014Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "docker-flags-781000"
	        }
	    }
	]

-- /stdout --
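
Note what this post-mortem inspect actually matched: the output is a network object (Scope, IPAM, an empty Containers map), not a container. It is the leftover bridge network created during the second start attempt (the network_create lines at 21:22:59, same 192.168.103.0/24 subnet); the container itself never came into existence. Scoping the inspect to a kind makes the distinction explicit, as in this small sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// inspect scopes "docker inspect" to one object kind, so a name that
	// survives only as a network (as here) is not mistaken for a container.
	func inspect(kind, name string) {
		out, err := exec.Command("docker", kind, "inspect", name).CombinedOutput()
		if err != nil {
			fmt.Printf("%s %q: %s", kind, name, out) // "No such container: ..."
			return
		}
		fmt.Printf("%s %q exists:\n%s", kind, name, out)
	}

	func main() {
		inspect("container", "docker-flags-781000") // fails: the container was never created
		inspect("network", "docker-flags-781000")   // succeeds: the leftover bridge network
	}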
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-781000 -n docker-flags-781000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-781000 -n docker-flags-781000: exit status 7 (111.010954ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0430 21:29:06.333622   18964 status.go:249] status error: host: state: unknown state "docker-flags-781000": docker container inspect docker-flags-781000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-781000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-781000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-781000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-781000
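
The cleanup is the same sequence the failure suggestion points at. A sketch of doing it by hand, using only commands that already appear in this log (the docker rm and network rm are belt and braces; "minikube delete" normally removes the container, volume, and network itself):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func run(name string, args ...string) {
		out, _ := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %s\n%s", name, strings.Join(args, " "), out)
	}

	func main() {
		// Profile-level cleanup, as in helpers_test.go:178.
		run("out/minikube-darwin-amd64", "delete", "-p", "docker-flags-781000")
		// Belt and braces: remove any leftovers by hand, as the trace itself does.
		run("docker", "rm", "-f", "-v", "docker-flags-781000")
		run("docker", "network", "rm", "docker-flags-781000")
	}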
--- FAIL: TestDockerFlags (756.16s)
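
One step in this run that did behave as designed is the subnet search visible earlier in the trace: network.go walks candidate /24 blocks upward from 192.168.49.0 in steps of 9 (49, 58, 67, 76, 85, 94), skips each one reserved by an existing network, and settles on 192.168.103.0/24. A compact sketch of that walk; the start point and step size are read off this log rather than minikube's source:

	package main

	import "fmt"

	func main() {
		// Subnets this log shows as already reserved by existing networks.
		reserved := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true, 94: true}
		// Walk 192.168.x.0/24 upward in steps of 9, as the trace does.
		for octet := 49; octet <= 255; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if reserved[octet] {
				fmt.Println("skipping subnet", cidr, "that is reserved")
				continue
			}
			fmt.Println("using free private subnet", cidr)
			break
		}
	}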

TestForceSystemdFlag (753.83s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-742000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-742000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 52 (12m32.735105488s)

-- stdout --
	* [force-systemd-flag-742000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18779
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-flag-742000" primary control-plane node in "force-systemd-flag-742000" cluster
	* Pulling base image v0.0.43-1714386659-18769 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-742000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0430 21:16:07.271254   17478 out.go:291] Setting OutFile to fd 1 ...
	I0430 21:16:07.271458   17478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 21:16:07.271480   17478 out.go:304] Setting ErrFile to fd 2...
	I0430 21:16:07.271484   17478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 21:16:07.271687   17478 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 21:16:07.273258   17478 out.go:298] Setting JSON to false
	I0430 21:16:07.295155   17478 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":8138,"bootTime":1714528829,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0430 21:16:07.295257   17478 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0430 21:16:07.317019   17478 out.go:177] * [force-systemd-flag-742000] minikube v1.33.0 on Darwin 14.4.1
	I0430 21:16:07.338658   17478 out.go:177]   - MINIKUBE_LOCATION=18779
	I0430 21:16:07.338727   17478 notify.go:220] Checking for updates...
	I0430 21:16:07.359906   17478 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	I0430 21:16:07.381961   17478 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0430 21:16:07.423986   17478 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0430 21:16:07.445924   17478 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	I0430 21:16:07.466706   17478 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0430 21:16:07.488823   17478 config.go:182] Loaded profile config "force-systemd-env-157000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 21:16:07.488965   17478 driver.go:392] Setting default libvirt URI to qemu:///system
	I0430 21:16:07.543251   17478 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0430 21:16:07.543424   17478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0430 21:16:07.650404   17478 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:113 OomKillDisable:false NGoroutines:225 SystemTime:2024-05-01 04:16:07.639618255 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211080192 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0430 21:16:07.672441   17478 out.go:177] * Using the docker driver based on user configuration
	I0430 21:16:07.693957   17478 start.go:297] selected driver: docker
	I0430 21:16:07.693996   17478 start.go:901] validating driver "docker" against <nil>
	I0430 21:16:07.694013   17478 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0430 21:16:07.698442   17478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0430 21:16:07.805628   17478 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:113 OomKillDisable:false NGoroutines:225 SystemTime:2024-05-01 04:16:07.794992361 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211080192 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
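Note: the daemon is probed twice in quick succession here, once while picking the default driver URI and again while validating the docker driver; both probes are the `docker system info --format "{{json .}}"` call shown above. The same probe can be run by hand to spot-check what minikube sees; the jq filter below is illustrative (jq is not part of the log) and pulls out the fields that matter for driver validation:

    docker system info --format "{{json .}}" | jq '{ServerVersion, NCPU, MemTotal, CgroupDriver}'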
	I0430 21:16:07.805809   17478 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0430 21:16:07.805991   17478 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0430 21:16:07.827815   17478 out.go:177] * Using Docker Desktop driver with root privileges
	I0430 21:16:07.849797   17478 cni.go:84] Creating CNI manager for ""
	I0430 21:16:07.849877   17478 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0430 21:16:07.849894   17478 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0430 21:16:07.849994   17478 start.go:340] cluster config:
	{Name:force-systemd-flag-742000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-742000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0430 21:16:07.871505   17478 out.go:177] * Starting "force-systemd-flag-742000" primary control-plane node in "force-systemd-flag-742000" cluster
	I0430 21:16:07.913714   17478 cache.go:121] Beginning downloading kic base image for docker with docker
	I0430 21:16:07.935278   17478 out.go:177] * Pulling base image v0.0.43-1714386659-18769 ...
	I0430 21:16:07.977656   17478 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 21:16:07.977695   17478 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e in local docker daemon
	I0430 21:16:07.977726   17478 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0430 21:16:07.977750   17478 cache.go:56] Caching tarball of preloaded images
	I0430 21:16:07.977965   17478 preload.go:173] Found /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0430 21:16:07.977989   17478 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0430 21:16:07.978150   17478 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/force-systemd-flag-742000/config.json ...
	I0430 21:16:07.978882   17478 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/force-systemd-flag-742000/config.json: {Name:mk3c640ee97511351ed3706dfc29ff1684a3db48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
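Note: the profile config saved above is plain JSON, so a wedged run can be inspected straight from the host. A minimal check, assuming jq is installed and using the CI path from the log (substitute your own MINIKUBE_HOME elsewhere); the field names come from the cluster config dumped at 21:16:07:

    jq '.Driver, .KubernetesConfig.KubernetesVersion' \
      /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/force-systemd-flag-742000/config.json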
	I0430 21:16:08.026841   17478 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e in local docker daemon, skipping pull
	I0430 21:16:08.026865   17478 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e exists in daemon, skipping load
	I0430 21:16:08.026885   17478 cache.go:194] Successfully downloaded all kic artifacts
	I0430 21:16:08.026943   17478 start.go:360] acquireMachinesLock for force-systemd-flag-742000: {Name:mkefd25a8bab5bcd19ab133556eec548df8e7da8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0430 21:16:08.027118   17478 start.go:364] duration metric: took 163.298µs to acquireMachinesLock for "force-systemd-flag-742000"
	I0430 21:16:08.027147   17478 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-742000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-742000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0430 21:16:08.027207   17478 start.go:125] createHost starting for "" (driver="docker")
	I0430 21:16:08.048569   17478 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0430 21:16:08.048946   17478 start.go:159] libmachine.API.Create for "force-systemd-flag-742000" (driver="docker")
	I0430 21:16:08.048998   17478 client.go:168] LocalClient.Create starting
	I0430 21:16:08.049173   17478 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/ca.pem
	I0430 21:16:08.049267   17478 main.go:141] libmachine: Decoding PEM data...
	I0430 21:16:08.049297   17478 main.go:141] libmachine: Parsing certificate...
	I0430 21:16:08.049418   17478 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/cert.pem
	I0430 21:16:08.049494   17478 main.go:141] libmachine: Decoding PEM data...
	I0430 21:16:08.049509   17478 main.go:141] libmachine: Parsing certificate...
	I0430 21:16:08.050407   17478 cli_runner.go:164] Run: docker network inspect force-systemd-flag-742000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0430 21:16:08.099004   17478 cli_runner.go:211] docker network inspect force-systemd-flag-742000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0430 21:16:08.099106   17478 network_create.go:281] running [docker network inspect force-systemd-flag-742000] to gather additional debugging logs...
	I0430 21:16:08.099121   17478 cli_runner.go:164] Run: docker network inspect force-systemd-flag-742000
	W0430 21:16:08.147321   17478 cli_runner.go:211] docker network inspect force-systemd-flag-742000 returned with exit code 1
	I0430 21:16:08.147356   17478 network_create.go:284] error running [docker network inspect force-systemd-flag-742000]: docker network inspect force-systemd-flag-742000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-742000 not found
	I0430 21:16:08.147370   17478 network_create.go:286] output of [docker network inspect force-systemd-flag-742000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-742000 not found
	
	** /stderr **
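Note: the exit status 1 with "network force-systemd-flag-742000 not found" is the expected result on a fresh profile; minikube inspects the per-profile network first and only creates it when the inspect fails. The next log line runs the same template against the always-present bridge network; a trimmed, copy-pastable version of that template is:

    docker network inspect bridge --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'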
	I0430 21:16:08.147499   17478 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0430 21:16:08.197072   17478 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:16:08.198707   17478 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:16:08.199079   17478 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000c5cd60}
	I0430 21:16:08.199110   17478 network_create.go:124] attempt to create docker network force-systemd-flag-742000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0430 21:16:08.199180   17478 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-742000 force-systemd-flag-742000
	W0430 21:16:08.247488   17478 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-742000 force-systemd-flag-742000 returned with exit code 1
	W0430 21:16:08.247526   17478 network_create.go:149] failed to create docker network force-systemd-flag-742000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-742000 force-systemd-flag-742000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0430 21:16:08.247547   17478 network_create.go:116] failed to create docker network force-systemd-flag-742000 192.168.67.0/24, will retry: subnet is taken
	I0430 21:16:08.248934   17478 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:16:08.249283   17478 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000c5dce0}
	I0430 21:16:08.249294   17478 network_create.go:124] attempt to create docker network force-systemd-flag-742000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0430 21:16:08.249362   17478 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-742000 force-systemd-flag-742000
	I0430 21:16:08.333844   17478 network_create.go:108] docker network force-systemd-flag-742000 192.168.76.0/24 created
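Note: the subnet hunt above starts at 192.168.49.0/24 and advances the third octet in steps of 9 (49, 58, 67, 76, ...), skipping subnets minikube already reserved and moving on whenever `docker network create` fails with "Pool overlaps with other one on this address space". A rough shell re-creation of that loop, using a throwaway network name (mk-subnet-probe is hypothetical, not from the log):

    for third in 49 58 67 76 85 94; do
      subnet="192.168.${third}.0/24"
      if docker network create --driver=bridge --subnet="$subnet" mk-subnet-probe >/dev/null 2>&1; then
        echo "free: $subnet"
        docker network rm mk-subnet-probe >/dev/null
        break
      fi
      echo "taken: $subnet"
    done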
	I0430 21:16:08.333898   17478 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-742000" container
	I0430 21:16:08.334019   17478 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0430 21:16:08.384134   17478 cli_runner.go:164] Run: docker volume create force-systemd-flag-742000 --label name.minikube.sigs.k8s.io=force-systemd-flag-742000 --label created_by.minikube.sigs.k8s.io=true
	I0430 21:16:08.433537   17478 oci.go:103] Successfully created a docker volume force-systemd-flag-742000
	I0430 21:16:08.433646   17478 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-742000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-742000 --entrypoint /usr/bin/test -v force-systemd-flag-742000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -d /var/lib
	I0430 21:16:08.752861   17478 oci.go:107] Successfully prepared a docker volume force-systemd-flag-742000
	I0430 21:16:08.752912   17478 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 21:16:08.752926   17478 kic.go:194] Starting extracting preloaded images to volume ...
	I0430 21:16:08.753042   17478 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-742000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -I lz4 -xf /preloaded.tar -C /extractDir
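Note: the `docker run` above is the entire preload step: the cached lz4 tarball is bind-mounted read-only, the profile volume is mounted at /extractDir, and `tar -I lz4 -xf` runs inside the kicbase image. The next log line is six minutes later (21:16:08 to 21:22:08), so the extraction never finished before the 360-second createHost timeout fired. Whether the volume was actually populated can be checked afterwards with the same image, which the log says is already in the local daemon:

    docker run --rm --entrypoint /bin/ls \
      -v force-systemd-flag-742000:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e \
      /extractDir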
	I0430 21:22:08.050871   17478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0430 21:22:08.051019   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:22:08.100215   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	I0430 21:22:08.100355   17478 retry.go:31] will retry after 237.515694ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
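Note: every retry in this block is the same lookup: minikube asks Docker which host port is published for the container's 22/tcp so it can open an SSH session, using the nested-index template to read Ports["22/tcp"][0].HostPort from the inspect output. It can only fail here because the container was never created; against any running container that publishes 22/tcp the general form is (with <container> standing in for a real name):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' <container>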
	I0430 21:22:08.339621   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:22:08.389733   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	I0430 21:22:08.389854   17478 retry.go:31] will retry after 278.54613ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:08.670807   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:22:08.723346   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	I0430 21:22:08.723484   17478 retry.go:31] will retry after 773.909023ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:09.498276   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:22:09.549753   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	W0430 21:22:09.549854   17478 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	
	W0430 21:22:09.549878   17478 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:09.549946   17478 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0430 21:22:09.550002   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:22:09.598807   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	I0430 21:22:09.598902   17478 retry.go:31] will retry after 285.265252ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:09.886530   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:22:09.939735   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	I0430 21:22:09.939842   17478 retry.go:31] will retry after 390.696855ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:10.331507   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:22:10.383411   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	I0430 21:22:10.383509   17478 retry.go:31] will retry after 532.761377ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:10.918683   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:22:10.982453   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	I0430 21:22:10.982562   17478 retry.go:31] will retry after 520.15413ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:11.505124   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:22:11.557463   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	W0430 21:22:11.557572   17478 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	
	W0430 21:22:11.557594   17478 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:11.557611   17478 start.go:128] duration metric: took 6m3.528936961s to createHost
	I0430 21:22:11.557619   17478 start.go:83] releasing machines lock for "force-systemd-flag-742000", held for 6m3.529038746s
	W0430 21:22:11.557634   17478 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0430 21:22:11.558058   17478 cli_runner.go:164] Run: docker container inspect force-systemd-flag-742000 --format={{.State.Status}}
	W0430 21:22:11.607943   17478 cli_runner.go:211] docker container inspect force-systemd-flag-742000 --format={{.State.Status}} returned with exit code 1
	I0430 21:22:11.608002   17478 delete.go:82] Unable to get host status for force-systemd-flag-742000, assuming it has already been deleted: state: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	W0430 21:22:11.608101   17478 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0430 21:22:11.608110   17478 start.go:728] Will try again in 5 seconds ...
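Note: the first createHost attempt has now been abandoned at exactly the 360-second timeout (6m3.5s including lock bookkeeping), and minikube falls back to one full recreate of the host. The equivalent manual recovery on a developer machine is to discard the half-built profile and start over:

    minikube delete -p force-systemd-flag-742000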
	I0430 21:22:16.609203   17478 start.go:360] acquireMachinesLock for force-systemd-flag-742000: {Name:mkefd25a8bab5bcd19ab133556eec548df8e7da8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0430 21:22:16.609499   17478 start.go:364] duration metric: took 165.095µs to acquireMachinesLock for "force-systemd-flag-742000"
	I0430 21:22:16.609544   17478 start.go:96] Skipping create...Using existing machine configuration
	I0430 21:22:16.609561   17478 fix.go:54] fixHost starting: 
	I0430 21:22:16.609992   17478 cli_runner.go:164] Run: docker container inspect force-systemd-flag-742000 --format={{.State.Status}}
	W0430 21:22:16.660767   17478 cli_runner.go:211] docker container inspect force-systemd-flag-742000 --format={{.State.Status}} returned with exit code 1
	I0430 21:22:16.660816   17478 fix.go:112] recreateIfNeeded on force-systemd-flag-742000: state= err=unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:16.660837   17478 fix.go:117] machineExists: false. err=machine does not exist
	I0430 21:22:16.682727   17478 out.go:177] * docker "force-systemd-flag-742000" container is missing, will recreate.
	I0430 21:22:16.725393   17478 delete.go:124] DEMOLISHING force-systemd-flag-742000 ...
	I0430 21:22:16.725606   17478 cli_runner.go:164] Run: docker container inspect force-systemd-flag-742000 --format={{.State.Status}}
	W0430 21:22:16.774124   17478 cli_runner.go:211] docker container inspect force-systemd-flag-742000 --format={{.State.Status}} returned with exit code 1
	W0430 21:22:16.774175   17478 stop.go:83] unable to get state: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:16.774194   17478 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:16.774573   17478 cli_runner.go:164] Run: docker container inspect force-systemd-flag-742000 --format={{.State.Status}}
	W0430 21:22:16.822450   17478 cli_runner.go:211] docker container inspect force-systemd-flag-742000 --format={{.State.Status}} returned with exit code 1
	I0430 21:22:16.822502   17478 delete.go:82] Unable to get host status for force-systemd-flag-742000, assuming it has already been deleted: state: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:16.822583   17478 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-742000
	W0430 21:22:16.870460   17478 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-742000 returned with exit code 1
	I0430 21:22:16.870519   17478 kic.go:371] could not find the container force-systemd-flag-742000 to remove it. will try anyways
	I0430 21:22:16.870592   17478 cli_runner.go:164] Run: docker container inspect force-systemd-flag-742000 --format={{.State.Status}}
	W0430 21:22:16.918618   17478 cli_runner.go:211] docker container inspect force-systemd-flag-742000 --format={{.State.Status}} returned with exit code 1
	W0430 21:22:16.918667   17478 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:16.918745   17478 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-742000 /bin/bash -c "sudo init 0"
	W0430 21:22:16.966657   17478 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-742000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0430 21:22:16.966692   17478 oci.go:650] error shutdown force-systemd-flag-742000: docker exec --privileged -t force-systemd-flag-742000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:17.969087   17478 cli_runner.go:164] Run: docker container inspect force-systemd-flag-742000 --format={{.State.Status}}
	W0430 21:22:18.021141   17478 cli_runner.go:211] docker container inspect force-systemd-flag-742000 --format={{.State.Status}} returned with exit code 1
	I0430 21:22:18.021200   17478 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:18.021214   17478 oci.go:664] temporary error: container force-systemd-flag-742000 status is  but expect it to be exited
	I0430 21:22:18.021241   17478 retry.go:31] will retry after 364.167121ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:18.387787   17478 cli_runner.go:164] Run: docker container inspect force-systemd-flag-742000 --format={{.State.Status}}
	W0430 21:22:18.441404   17478 cli_runner.go:211] docker container inspect force-systemd-flag-742000 --format={{.State.Status}} returned with exit code 1
	I0430 21:22:18.441454   17478 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:18.441464   17478 oci.go:664] temporary error: container force-systemd-flag-742000 status is  but expect it to be exited
	I0430 21:22:18.441490   17478 retry.go:31] will retry after 890.118338ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:19.333728   17478 cli_runner.go:164] Run: docker container inspect force-systemd-flag-742000 --format={{.State.Status}}
	W0430 21:22:19.384835   17478 cli_runner.go:211] docker container inspect force-systemd-flag-742000 --format={{.State.Status}} returned with exit code 1
	I0430 21:22:19.384896   17478 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:19.384905   17478 oci.go:664] temporary error: container force-systemd-flag-742000 status is  but expect it to be exited
	I0430 21:22:19.384930   17478 retry.go:31] will retry after 571.773456ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:19.957074   17478 cli_runner.go:164] Run: docker container inspect force-systemd-flag-742000 --format={{.State.Status}}
	W0430 21:22:20.008510   17478 cli_runner.go:211] docker container inspect force-systemd-flag-742000 --format={{.State.Status}} returned with exit code 1
	I0430 21:22:20.008553   17478 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:20.008567   17478 oci.go:664] temporary error: container force-systemd-flag-742000 status is  but expect it to be exited
	I0430 21:22:20.008593   17478 retry.go:31] will retry after 1.688065337s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:21.697403   17478 cli_runner.go:164] Run: docker container inspect force-systemd-flag-742000 --format={{.State.Status}}
	W0430 21:22:21.752050   17478 cli_runner.go:211] docker container inspect force-systemd-flag-742000 --format={{.State.Status}} returned with exit code 1
	I0430 21:22:21.752110   17478 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:21.752121   17478 oci.go:664] temporary error: container force-systemd-flag-742000 status is  but expect it to be exited
	I0430 21:22:21.752147   17478 retry.go:31] will retry after 3.735644204s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:25.489491   17478 cli_runner.go:164] Run: docker container inspect force-systemd-flag-742000 --format={{.State.Status}}
	W0430 21:22:25.540196   17478 cli_runner.go:211] docker container inspect force-systemd-flag-742000 --format={{.State.Status}} returned with exit code 1
	I0430 21:22:25.540243   17478 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:25.540252   17478 oci.go:664] temporary error: container force-systemd-flag-742000 status is  but expect it to be exited
	I0430 21:22:25.540275   17478 retry.go:31] will retry after 2.725070703s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:28.267723   17478 cli_runner.go:164] Run: docker container inspect force-systemd-flag-742000 --format={{.State.Status}}
	W0430 21:22:28.319479   17478 cli_runner.go:211] docker container inspect force-systemd-flag-742000 --format={{.State.Status}} returned with exit code 1
	I0430 21:22:28.319530   17478 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:28.319539   17478 oci.go:664] temporary error: container force-systemd-flag-742000 status is  but expect it to be exited
	I0430 21:22:28.319564   17478 retry.go:31] will retry after 4.137211562s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:32.459111   17478 cli_runner.go:164] Run: docker container inspect force-systemd-flag-742000 --format={{.State.Status}}
	W0430 21:22:32.510628   17478 cli_runner.go:211] docker container inspect force-systemd-flag-742000 --format={{.State.Status}} returned with exit code 1
	I0430 21:22:32.510676   17478 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:22:32.510690   17478 oci.go:664] temporary error: container force-systemd-flag-742000 status is  but expect it to be exited
	I0430 21:22:32.510724   17478 oci.go:88] couldn't shut down force-systemd-flag-742000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	 
	I0430 21:22:32.510797   17478 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-742000
	I0430 21:22:32.560791   17478 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-742000
	W0430 21:22:32.608757   17478 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-742000 returned with exit code 1
	I0430 21:22:32.608875   17478 cli_runner.go:164] Run: docker network inspect force-systemd-flag-742000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0430 21:22:32.657795   17478 cli_runner.go:164] Run: docker network rm force-systemd-flag-742000
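Note: the commands above are the whole teardown for a kic profile whose container never existed: force-remove the (absent) container together with its anonymous volumes, confirm it is gone, then remove the per-profile network. The named volume created at 21:16:08 survives `docker rm -f -v`; a complete by-hand cleanup would also remove it (all three objects share the profile name):

    docker rm -f -v force-systemd-flag-742000 2>/dev/null
    docker network rm force-systemd-flag-742000 2>/dev/null
    docker volume rm force-systemd-flag-742000 2>/dev/null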
	I0430 21:22:32.760018   17478 fix.go:124] Sleeping 1 second for extra luck!
	I0430 21:22:33.762175   17478 start.go:125] createHost starting for "" (driver="docker")
	I0430 21:22:33.785534   17478 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0430 21:22:33.785657   17478 start.go:159] libmachine.API.Create for "force-systemd-flag-742000" (driver="docker")
	I0430 21:22:33.785677   17478 client.go:168] LocalClient.Create starting
	I0430 21:22:33.785840   17478 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/ca.pem
	I0430 21:22:33.785919   17478 main.go:141] libmachine: Decoding PEM data...
	I0430 21:22:33.785939   17478 main.go:141] libmachine: Parsing certificate...
	I0430 21:22:33.786016   17478 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/cert.pem
	I0430 21:22:33.786081   17478 main.go:141] libmachine: Decoding PEM data...
	I0430 21:22:33.786104   17478 main.go:141] libmachine: Parsing certificate...
	I0430 21:22:33.807094   17478 cli_runner.go:164] Run: docker network inspect force-systemd-flag-742000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0430 21:22:33.859453   17478 cli_runner.go:211] docker network inspect force-systemd-flag-742000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0430 21:22:33.859551   17478 network_create.go:281] running [docker network inspect force-systemd-flag-742000] to gather additional debugging logs...
	I0430 21:22:33.859570   17478 cli_runner.go:164] Run: docker network inspect force-systemd-flag-742000
	W0430 21:22:33.907703   17478 cli_runner.go:211] docker network inspect force-systemd-flag-742000 returned with exit code 1
	I0430 21:22:33.907733   17478 network_create.go:284] error running [docker network inspect force-systemd-flag-742000]: docker network inspect force-systemd-flag-742000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-742000 not found
	I0430 21:22:33.907750   17478 network_create.go:286] output of [docker network inspect force-systemd-flag-742000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-742000 not found
	
	** /stderr **
	I0430 21:22:33.907903   17478 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0430 21:22:33.958558   17478 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:22:33.960111   17478 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:22:33.961676   17478 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:22:33.963232   17478 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:22:33.964717   17478 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:22:33.965048   17478 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000c5d4c0}
	I0430 21:22:33.965059   17478 network_create.go:124] attempt to create docker network force-systemd-flag-742000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0430 21:22:33.965123   17478 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-742000 force-systemd-flag-742000
	I0430 21:22:34.048950   17478 network_create.go:108] docker network force-systemd-flag-742000 192.168.94.0/24 created
	I0430 21:22:34.048989   17478 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-flag-742000" container
	I0430 21:22:34.049087   17478 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0430 21:22:34.099191   17478 cli_runner.go:164] Run: docker volume create force-systemd-flag-742000 --label name.minikube.sigs.k8s.io=force-systemd-flag-742000 --label created_by.minikube.sigs.k8s.io=true
	I0430 21:22:34.147344   17478 oci.go:103] Successfully created a docker volume force-systemd-flag-742000
	I0430 21:22:34.147455   17478 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-742000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-742000 --entrypoint /usr/bin/test -v force-systemd-flag-742000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -d /var/lib
	I0430 21:22:34.397926   17478 oci.go:107] Successfully prepared a docker volume force-systemd-flag-742000
	I0430 21:22:34.397960   17478 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 21:22:34.397973   17478 kic.go:194] Starting extracting preloaded images to volume ...
	I0430 21:22:34.398078   17478 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-742000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -I lz4 -xf /preloaded.tar -C /extractDir
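Note: the retry reproduces the first attempt step for step, including another six-minute silence after this extraction command (21:22:34 to 21:28:33), so the hang is not a one-off. The tar container is unnamed, so when diagnosing a stall like this, filtering on the mounted volume is the reliable way to see whether the extraction is still running:

    docker ps -a --filter volume=force-systemd-flag-742000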
	I0430 21:28:33.789529   17478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0430 21:28:33.789656   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:28:33.841279   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	I0430 21:28:33.841399   17478 retry.go:31] will retry after 200.99443ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:28:34.044854   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:28:34.094787   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	I0430 21:28:34.094907   17478 retry.go:31] will retry after 282.878683ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:28:34.380142   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:28:34.434572   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	I0430 21:28:34.434684   17478 retry.go:31] will retry after 465.049668ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:28:34.901063   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:28:34.952601   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	I0430 21:28:34.952718   17478 retry.go:31] will retry after 607.191445ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:28:35.562329   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:28:35.613370   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	W0430 21:28:35.613487   17478 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	
	W0430 21:28:35.613507   17478 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:28:35.613567   17478 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0430 21:28:35.613622   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:28:35.663387   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	I0430 21:28:35.663484   17478 retry.go:31] will retry after 284.381319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:28:35.949364   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:28:36.000047   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	I0430 21:28:36.000141   17478 retry.go:31] will retry after 343.020886ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:28:36.345554   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:28:36.397269   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	I0430 21:28:36.397370   17478 retry.go:31] will retry after 576.751769ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:28:36.976532   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:28:37.028133   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	W0430 21:28:37.028247   17478 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	
	W0430 21:28:37.028267   17478 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:28:37.028283   17478 start.go:128] duration metric: took 6m3.26459375s to createHost
	I0430 21:28:37.028358   17478 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0430 21:28:37.028413   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:28:37.077234   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	I0430 21:28:37.077327   17478 retry.go:31] will retry after 144.959259ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:28:37.224716   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:28:37.298844   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	I0430 21:28:37.299000   17478 retry.go:31] will retry after 524.756782ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:28:37.824475   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:28:37.875259   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	I0430 21:28:37.875358   17478 retry.go:31] will retry after 698.33556ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:28:38.575149   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:28:38.628457   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	W0430 21:28:38.628558   17478 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	
	W0430 21:28:38.628573   17478 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:28:38.628646   17478 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0430 21:28:38.628705   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:28:38.677539   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	I0430 21:28:38.677631   17478 retry.go:31] will retry after 166.300784ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:28:38.846317   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:28:38.895793   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	I0430 21:28:38.895882   17478 retry.go:31] will retry after 317.506393ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:28:39.213884   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:28:39.265249   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	I0430 21:28:39.265353   17478 retry.go:31] will retry after 480.145561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:28:39.747891   17478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000
	W0430 21:28:39.798078   17478 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000 returned with exit code 1
	W0430 21:28:39.798181   17478 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	
	W0430 21:28:39.798199   17478 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-742000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-742000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	I0430 21:28:39.798211   17478 fix.go:56] duration metric: took 6m23.187120886s for fixHost
	I0430 21:28:39.798220   17478 start.go:83] releasing machines lock for "force-systemd-flag-742000", held for 6m23.18717116s
	W0430 21:28:39.798294   17478 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-742000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-742000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0430 21:28:39.840710   17478 out.go:177] 
	W0430 21:28:39.863870   17478 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0430 21:28:39.863922   17478 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0430 21:28:39.863954   17478 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0430 21:28:39.885892   17478 out.go:177] 

** /stderr **
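Reading the trace above: the docker network and volume for force-systemd-flag-742000 are created, but the preload extraction ("docker run ... tar -I lz4 -xf /preloaded.tar") stalls from 21:22:34 to 21:28:33, createHost hits its 360-second limit, and the node container is never started, so every probe for the container's SSH port fails with "No such container". The probe can be re-created by hand; the sketch below is illustrative only (the container name is this profile's, and the delays merely approximate what retry.go chose above):

    #!/bin/sh
    # Ask Docker which host port is published for the node's port 22,
    # retrying with a roughly doubling delay, as the retry.go lines above do.
    name=force-systemd-flag-742000   # assumed container/profile name
    delay=0.2
    for attempt in 1 2 3 4 5; do
      if port=$(docker container inspect -f \
          '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
          "$name" 2>/dev/null); then
        echo "ssh port: $port"
        exit 0
      fi
      echo "attempt $attempt: no inspectable container; retrying in ${delay}s" >&2
      sleep "$delay"
      delay=$(awk "BEGIN{print $delay*2}")   # double the backoff
    done
    echo "giving up: no container named $name" >&2
    exit 1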
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-742000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 52
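Before giving up, start.go also samples disk usage on the node over SSH, which fails for the same reason (there is no SSH port to connect to). The two probes are plain df one-liners; on any Linux host they behave like this (sketch; /var is the path minikube checks):

    # NR==2 selects df's data row (row 1 is the header).
    df -h /var  | awk 'NR==2{print $5}'    # Use%  column, e.g. "63%"
    df -BG /var | awk 'NR==2{print $4}'    # Avail column in whole GiB, e.g. "12G"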
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-742000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-742000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (198.951215ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-flag-742000 host status: state: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000
	

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-742000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
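This assertion is the point of the test: with --force-systemd, the Docker daemon inside the node should report systemd as its cgroup driver. Against a healthy profile the same check can be run by hand exactly as the harness does (prints "systemd" on success, "cgroupfs" otherwise):

    out/minikube-darwin-amd64 -p force-systemd-flag-742000 ssh "docker info --format {{.CgroupDriver}}"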
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-04-30 21:28:40.16004 -0700 PDT m=+7088.445365276
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-742000
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-742000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-742000",
	        "Id": "454177735b51bef09f4e90b87bb53abf5f8e0ed997294c45b43c5faa9a7cd009",
	        "Created": "2024-05-01T04:22:34.009538152Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-flag-742000"
	        }
	    }
	]

-- /stdout --
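Note what the post-mortem inspect actually returned: the Docker network object, with "Containers": {} empty, because the bridge network was created but the node container never was. Orphaned minikube networks like this can be listed and removed by label (a sketch; the label values come from the inspect output above, and "minikube delete -p <profile>" performs the same cleanup, as the harness does below):

    docker network ls --filter label=created_by.minikube.sigs.k8s.io=true
    docker network rm force-systemd-flag-742000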
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-742000 -n force-systemd-flag-742000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-742000 -n force-systemd-flag-742000: exit status 7 (111.807441ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0430 21:28:40.321684   18812 status.go:249] status error: host: state: unknown state "force-systemd-flag-742000": docker container inspect force-systemd-flag-742000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-742000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-742000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-742000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-742000
--- FAIL: TestForceSystemdFlag (753.83s)

TestForceSystemdEnv (750.85s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-157000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E0430 21:04:09.857608    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 21:06:06.802910    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 21:06:41.439044    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 21:09:44.617015    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 21:11:06.932781    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 21:11:41.568828    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
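The E ... cert_rotation.go:168 lines above appear to be background noise from the shared test process (pid 7854): a client-go certificate-rotation watcher is still polling client certificates for the addons-257000 and functional-558000 profiles, which earlier tests already deleted, so each poll fails with "no such file or directory". They are unrelated to force-systemd-env-157000; the paths from the messages can be checked directly (illustrative command only):

    ls /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt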
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-157000 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 52 (12m29.727878733s)

-- stdout --
	* [force-systemd-env-157000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18779
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-env-157000" primary control-plane node in "force-systemd-env-157000" cluster
	* Pulling base image v0.0.43-1714386659-18769 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-157000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0430 21:03:59.971224   16723 out.go:291] Setting OutFile to fd 1 ...
	I0430 21:03:59.971974   16723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 21:03:59.971983   16723 out.go:304] Setting ErrFile to fd 2...
	I0430 21:03:59.971990   16723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 21:03:59.972527   16723 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 21:03:59.974095   16723 out.go:298] Setting JSON to false
	I0430 21:03:59.996092   16723 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":7410,"bootTime":1714528829,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0430 21:03:59.996184   16723 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0430 21:04:00.018181   16723 out.go:177] * [force-systemd-env-157000] minikube v1.33.0 on Darwin 14.4.1
	I0430 21:04:00.080870   16723 out.go:177]   - MINIKUBE_LOCATION=18779
	I0430 21:04:00.059960   16723 notify.go:220] Checking for updates...
	I0430 21:04:00.143775   16723 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	I0430 21:04:00.165153   16723 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0430 21:04:00.185925   16723 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0430 21:04:00.206827   16723 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	I0430 21:04:00.228114   16723 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0430 21:04:00.249427   16723 config.go:182] Loaded profile config "offline-docker-844000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 21:04:00.249506   16723 driver.go:392] Setting default libvirt URI to qemu:///system
	I0430 21:04:00.302190   16723 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0430 21:04:00.302348   16723 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0430 21:04:00.407457   16723 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:105 OomKillDisable:false NGoroutines:195 SystemTime:2024-05-01 04:04:00.396957101 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211080192 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0430 21:04:00.449569   16723 out.go:177] * Using the docker driver based on user configuration
	I0430 21:04:00.470721   16723 start.go:297] selected driver: docker
	I0430 21:04:00.470778   16723 start.go:901] validating driver "docker" against <nil>
	I0430 21:04:00.470793   16723 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0430 21:04:00.475190   16723 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0430 21:04:00.581734   16723 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:105 OomKillDisable:false NGoroutines:195 SystemTime:2024-05-01 04:04:00.571652769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211080192 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0430 21:04:00.581923   16723 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0430 21:04:00.582123   16723 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0430 21:04:00.603655   16723 out.go:177] * Using Docker Desktop driver with root privileges
	I0430 21:04:00.626750   16723 cni.go:84] Creating CNI manager for ""
	I0430 21:04:00.626793   16723 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0430 21:04:00.626817   16723 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0430 21:04:00.626907   16723 start.go:340] cluster config:
	{Name:force-systemd-env-157000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-157000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0430 21:04:00.649301   16723 out.go:177] * Starting "force-systemd-env-157000" primary control-plane node in "force-systemd-env-157000" cluster
	I0430 21:04:00.691699   16723 cache.go:121] Beginning downloading kic base image for docker with docker
	I0430 21:04:00.713492   16723 out.go:177] * Pulling base image v0.0.43-1714386659-18769 ...
	I0430 21:04:00.755748   16723 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 21:04:00.755793   16723 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e in local docker daemon
	I0430 21:04:00.755841   16723 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0430 21:04:00.755862   16723 cache.go:56] Caching tarball of preloaded images
	I0430 21:04:00.756110   16723 preload.go:173] Found /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0430 21:04:00.756133   16723 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0430 21:04:00.757087   16723 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/force-systemd-env-157000/config.json ...
	I0430 21:04:00.757209   16723 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/force-systemd-env-157000/config.json: {Name:mk9f53b5b4e19cfd271bfbbbb2d52efb6d6bcb4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0430 21:04:00.807950   16723 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e in local docker daemon, skipping pull
	I0430 21:04:00.807987   16723 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e exists in daemon, skipping load
	I0430 21:04:00.808005   16723 cache.go:194] Successfully downloaded all kic artifacts
	I0430 21:04:00.808040   16723 start.go:360] acquireMachinesLock for force-systemd-env-157000: {Name:mk039493e2a423f5217ef6c9b64e1e2fbcf7939b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0430 21:04:00.808199   16723 start.go:364] duration metric: took 148.244µs to acquireMachinesLock for "force-systemd-env-157000"
	I0430 21:04:00.808228   16723 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-157000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-env-157000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0430 21:04:00.808457   16723 start.go:125] createHost starting for "" (driver="docker")
	I0430 21:04:00.851739   16723 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0430 21:04:00.852107   16723 start.go:159] libmachine.API.Create for "force-systemd-env-157000" (driver="docker")
	I0430 21:04:00.852158   16723 client.go:168] LocalClient.Create starting
	I0430 21:04:00.852392   16723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/ca.pem
	I0430 21:04:00.852504   16723 main.go:141] libmachine: Decoding PEM data...
	I0430 21:04:00.852538   16723 main.go:141] libmachine: Parsing certificate...
	I0430 21:04:00.852631   16723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/cert.pem
	I0430 21:04:00.852710   16723 main.go:141] libmachine: Decoding PEM data...
	I0430 21:04:00.852725   16723 main.go:141] libmachine: Parsing certificate...
	I0430 21:04:00.853669   16723 cli_runner.go:164] Run: docker network inspect force-systemd-env-157000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0430 21:04:00.918866   16723 cli_runner.go:211] docker network inspect force-systemd-env-157000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0430 21:04:00.918968   16723 network_create.go:281] running [docker network inspect force-systemd-env-157000] to gather additional debugging logs...
	I0430 21:04:00.918984   16723 cli_runner.go:164] Run: docker network inspect force-systemd-env-157000
	W0430 21:04:00.966349   16723 cli_runner.go:211] docker network inspect force-systemd-env-157000 returned with exit code 1
	I0430 21:04:00.966378   16723 network_create.go:284] error running [docker network inspect force-systemd-env-157000]: docker network inspect force-systemd-env-157000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-157000 not found
	I0430 21:04:00.966396   16723 network_create.go:286] output of [docker network inspect force-systemd-env-157000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-157000 not found
	
	** /stderr **
	I0430 21:04:00.966510   16723 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0430 21:04:01.015778   16723 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:04:01.017348   16723 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:04:01.019047   16723 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:04:01.020805   16723 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:04:01.021418   16723 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002402dd0}
	I0430 21:04:01.021440   16723 network_create.go:124] attempt to create docker network force-systemd-env-157000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0430 21:04:01.021541   16723 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-157000 force-systemd-env-157000
	I0430 21:04:01.104412   16723 network_create.go:108] docker network force-systemd-env-157000 192.168.85.0/24 created
	I0430 21:04:01.104454   16723 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-157000" container
	I0430 21:04:01.104558   16723 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0430 21:04:01.156722   16723 cli_runner.go:164] Run: docker volume create force-systemd-env-157000 --label name.minikube.sigs.k8s.io=force-systemd-env-157000 --label created_by.minikube.sigs.k8s.io=true
	I0430 21:04:01.204971   16723 oci.go:103] Successfully created a docker volume force-systemd-env-157000
	I0430 21:04:01.205081   16723 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-157000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-157000 --entrypoint /usr/bin/test -v force-systemd-env-157000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -d /var/lib
	I0430 21:04:01.513875   16723 oci.go:107] Successfully prepared a docker volume force-systemd-env-157000
	I0430 21:04:01.513928   16723 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 21:04:01.513940   16723 kic.go:194] Starting extracting preloaded images to volume ...
	I0430 21:04:01.514040   16723 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-157000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -I lz4 -xf /preloaded.tar -C /extractDir
	I0430 21:10:00.985985   16723 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0430 21:10:00.986126   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:10:01.040017   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	I0430 21:10:01.040156   16723 retry.go:31] will retry after 309.150545ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:01.351702   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:10:01.400031   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	I0430 21:10:01.400144   16723 retry.go:31] will retry after 453.962054ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:01.856291   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:10:01.907005   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	I0430 21:10:01.907109   16723 retry.go:31] will retry after 530.426359ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:02.438987   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:10:02.489989   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	W0430 21:10:02.490088   16723 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	
	W0430 21:10:02.490109   16723 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:02.490159   16723 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0430 21:10:02.490211   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:10:02.538302   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	I0430 21:10:02.538391   16723 retry.go:31] will retry after 147.816117ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:02.687444   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:10:02.738903   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	I0430 21:10:02.738996   16723 retry.go:31] will retry after 216.50989ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:02.957223   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:10:03.006663   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	I0430 21:10:03.006755   16723 retry.go:31] will retry after 462.360942ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:03.471503   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:10:03.523290   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	W0430 21:10:03.523384   16723 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	
	W0430 21:10:03.523400   16723 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:03.523416   16723 start.go:128] duration metric: took 6m2.583471936s to createHost
	I0430 21:10:03.523424   16723 start.go:83] releasing machines lock for "force-systemd-env-157000", held for 6m2.583754856s
	W0430 21:10:03.523439   16723 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0430 21:10:03.523862   16723 cli_runner.go:164] Run: docker container inspect force-systemd-env-157000 --format={{.State.Status}}
	W0430 21:10:03.572928   16723 cli_runner.go:211] docker container inspect force-systemd-env-157000 --format={{.State.Status}} returned with exit code 1
	I0430 21:10:03.572983   16723 delete.go:82] Unable to get host status for force-systemd-env-157000, assuming it has already been deleted: state: unknown state "force-systemd-env-157000": docker container inspect force-systemd-env-157000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	W0430 21:10:03.573069   16723 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0430 21:10:03.573079   16723 start.go:728] Will try again in 5 seconds ...
	I0430 21:10:08.575343   16723 start.go:360] acquireMachinesLock for force-systemd-env-157000: {Name:mk039493e2a423f5217ef6c9b64e1e2fbcf7939b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0430 21:10:08.575548   16723 start.go:364] duration metric: took 162.372µs to acquireMachinesLock for "force-systemd-env-157000"
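
The acquireMachinesLock lines bracket a cross-process lock that serializes host operations per machine name; note the spec printed above (500ms poll delay, 10m acquisition timeout). A rough Unix-only stand-in using flock, not the lock library minikube actually uses:

    package main

    import (
    	"fmt"
    	"os"
    	"syscall"
    	"time"
    )

    // acquireMachinesLock polls an exclusive flock on a per-machine lock file
    // until it is held or the timeout elapses, mimicking the Delay/Timeout pair
    // in the log. Unix-only sketch under stated assumptions.
    func acquireMachinesLock(path string, delay, timeout time.Duration) (*os.File, error) {
    	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o644)
    	if err != nil {
    		return nil, err
    	}
    	deadline := time.Now().Add(timeout)
    	for {
    		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
    			return f, nil // caller unlocks with LOCK_UN and closes
    		}
    		if time.Now().After(deadline) {
    			f.Close()
    			return nil, fmt.Errorf("lock %s: timed out after %v", path, timeout)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	f, err := acquireMachinesLock("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
    	fmt.Println(f != nil, err)
    }
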
	I0430 21:10:08.575604   16723 start.go:96] Skipping create...Using existing machine configuration
	I0430 21:10:08.575624   16723 fix.go:54] fixHost starting: 
	I0430 21:10:08.576049   16723 cli_runner.go:164] Run: docker container inspect force-systemd-env-157000 --format={{.State.Status}}
	W0430 21:10:08.628953   16723 cli_runner.go:211] docker container inspect force-systemd-env-157000 --format={{.State.Status}} returned with exit code 1
	I0430 21:10:08.628999   16723 fix.go:112] recreateIfNeeded on force-systemd-env-157000: state= err=unknown state "force-systemd-env-157000": docker container inspect force-systemd-env-157000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:08.629022   16723 fix.go:117] machineExists: false. err=machine does not exist
	I0430 21:10:08.651000   16723 out.go:177] * docker "force-systemd-env-157000" container is missing, will recreate.
	I0430 21:10:08.693390   16723 delete.go:124] DEMOLISHING force-systemd-env-157000 ...
	I0430 21:10:08.693599   16723 cli_runner.go:164] Run: docker container inspect force-systemd-env-157000 --format={{.State.Status}}
	W0430 21:10:08.743072   16723 cli_runner.go:211] docker container inspect force-systemd-env-157000 --format={{.State.Status}} returned with exit code 1
	W0430 21:10:08.743135   16723 stop.go:83] unable to get state: unknown state "force-systemd-env-157000": docker container inspect force-systemd-env-157000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:08.743155   16723 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-157000": docker container inspect force-systemd-env-157000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:08.743540   16723 cli_runner.go:164] Run: docker container inspect force-systemd-env-157000 --format={{.State.Status}}
	W0430 21:10:08.791403   16723 cli_runner.go:211] docker container inspect force-systemd-env-157000 --format={{.State.Status}} returned with exit code 1
	I0430 21:10:08.791467   16723 delete.go:82] Unable to get host status for force-systemd-env-157000, assuming it has already been deleted: state: unknown state "force-systemd-env-157000": docker container inspect force-systemd-env-157000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:08.791564   16723 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-157000
	W0430 21:10:08.838871   16723 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-157000 returned with exit code 1
	I0430 21:10:08.838905   16723 kic.go:371] could not find the container force-systemd-env-157000 to remove it. will try anyways
	I0430 21:10:08.838981   16723 cli_runner.go:164] Run: docker container inspect force-systemd-env-157000 --format={{.State.Status}}
	W0430 21:10:08.886582   16723 cli_runner.go:211] docker container inspect force-systemd-env-157000 --format={{.State.Status}} returned with exit code 1
	W0430 21:10:08.886641   16723 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-157000": docker container inspect force-systemd-env-157000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:08.886717   16723 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-157000 /bin/bash -c "sudo init 0"
	W0430 21:10:08.934828   16723 cli_runner.go:211] docker exec --privileged -t force-systemd-env-157000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0430 21:10:08.934865   16723 oci.go:650] error shutdown force-systemd-env-157000: docker exec --privileged -t force-systemd-env-157000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
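
The DEMOLISHING sequence first attempts a clean power-off from inside the guest (sudo init 0 via docker exec), then polls the container state for "exited" before falling back to force removal, which is the loop that follows. A condensed sketch of that shape; the fixed one-second poll is our simplification of the jittered backoff visible below:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // tryGracefulStop asks init inside the container to power off, then polls
    // .State.Status until it reads "exited". Both steps fail fast when the
    // container is already gone, as in the log.
    func tryGracefulStop(name string) error {
    	_ = exec.Command("docker", "exec", "--privileged", "-t", name,
    		"/bin/bash", "-c", "sudo init 0").Run() // best effort
    	for i := 0; i < 5; i++ {
    		out, err := exec.Command("docker", "container", "inspect", name,
    			"--format", "{{.State.Status}}").Output()
    		if err == nil && strings.TrimSpace(string(out)) == "exited" {
    			return nil
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("could not verify container %s is exited", name)
    }

    func main() { fmt.Println(tryGracefulStop("force-systemd-env-157000")) }
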
	I0430 21:10:09.935543   16723 cli_runner.go:164] Run: docker container inspect force-systemd-env-157000 --format={{.State.Status}}
	W0430 21:10:09.986420   16723 cli_runner.go:211] docker container inspect force-systemd-env-157000 --format={{.State.Status}} returned with exit code 1
	I0430 21:10:09.986462   16723 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-157000": docker container inspect force-systemd-env-157000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:09.986473   16723 oci.go:664] temporary error: container force-systemd-env-157000 status is  but expect it to be exited
	I0430 21:10:09.986500   16723 retry.go:31] will retry after 509.364502ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-157000": docker container inspect force-systemd-env-157000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
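
All of the "will retry after …" lines in this log come from the same generic retry helper; the intervals grow and are jittered, which is why no two delays match. A behavioral sketch only, assuming nothing about retry.go beyond what the log shows:

    package main

    import (
    	"errors"
    	"log"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff re-invokes fn until it succeeds or attempts run out,
    // sleeping a growing, jittered delay between tries, reproducing the
    // "will retry after 509.364502ms: ..." pattern above.
    func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		sleep := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
    		log.Printf("will retry after %v: %v", sleep, err)
    		time.Sleep(sleep)
    	}
    	return err
    }

    func main() {
    	_ = retryWithBackoff(func() error { return errors.New("No such container") },
    		4, 300*time.Millisecond)
    }
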
	I0430 21:10:10.498273   16723 cli_runner.go:164] Run: docker container inspect force-systemd-env-157000 --format={{.State.Status}}
	W0430 21:10:10.549229   16723 cli_runner.go:211] docker container inspect force-systemd-env-157000 --format={{.State.Status}} returned with exit code 1
	I0430 21:10:10.549278   16723 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-157000": docker container inspect force-systemd-env-157000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:10.549289   16723 oci.go:664] temporary error: container force-systemd-env-157000 status is  but expect it to be exited
	I0430 21:10:10.549317   16723 retry.go:31] will retry after 781.244579ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-157000": docker container inspect force-systemd-env-157000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:11.332439   16723 cli_runner.go:164] Run: docker container inspect force-systemd-env-157000 --format={{.State.Status}}
	W0430 21:10:11.384083   16723 cli_runner.go:211] docker container inspect force-systemd-env-157000 --format={{.State.Status}} returned with exit code 1
	I0430 21:10:11.384138   16723 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-157000": docker container inspect force-systemd-env-157000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:11.384150   16723 oci.go:664] temporary error: container force-systemd-env-157000 status is  but expect it to be exited
	I0430 21:10:11.384177   16723 retry.go:31] will retry after 760.059546ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-157000": docker container inspect force-systemd-env-157000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:12.144862   16723 cli_runner.go:164] Run: docker container inspect force-systemd-env-157000 --format={{.State.Status}}
	W0430 21:10:12.194972   16723 cli_runner.go:211] docker container inspect force-systemd-env-157000 --format={{.State.Status}} returned with exit code 1
	I0430 21:10:12.195018   16723 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-157000": docker container inspect force-systemd-env-157000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:12.195030   16723 oci.go:664] temporary error: container force-systemd-env-157000 status is  but expect it to be exited
	I0430 21:10:12.195056   16723 retry.go:31] will retry after 2.286543891s: couldn't verify container is exited. %v: unknown state "force-systemd-env-157000": docker container inspect force-systemd-env-157000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:14.483935   16723 cli_runner.go:164] Run: docker container inspect force-systemd-env-157000 --format={{.State.Status}}
	W0430 21:10:14.535920   16723 cli_runner.go:211] docker container inspect force-systemd-env-157000 --format={{.State.Status}} returned with exit code 1
	I0430 21:10:14.535974   16723 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-157000": docker container inspect force-systemd-env-157000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:14.535986   16723 oci.go:664] temporary error: container force-systemd-env-157000 status is  but expect it to be exited
	I0430 21:10:14.536023   16723 retry.go:31] will retry after 1.709140631s: couldn't verify container is exited. %v: unknown state "force-systemd-env-157000": docker container inspect force-systemd-env-157000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:16.245753   16723 cli_runner.go:164] Run: docker container inspect force-systemd-env-157000 --format={{.State.Status}}
	W0430 21:10:16.294766   16723 cli_runner.go:211] docker container inspect force-systemd-env-157000 --format={{.State.Status}} returned with exit code 1
	I0430 21:10:16.294816   16723 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-157000": docker container inspect force-systemd-env-157000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:16.294826   16723 oci.go:664] temporary error: container force-systemd-env-157000 status is  but expect it to be exited
	I0430 21:10:16.294853   16723 retry.go:31] will retry after 5.453060554s: couldn't verify container is exited. %v: unknown state "force-systemd-env-157000": docker container inspect force-systemd-env-157000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:21.749574   16723 cli_runner.go:164] Run: docker container inspect force-systemd-env-157000 --format={{.State.Status}}
	W0430 21:10:21.799623   16723 cli_runner.go:211] docker container inspect force-systemd-env-157000 --format={{.State.Status}} returned with exit code 1
	I0430 21:10:21.799674   16723 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-157000": docker container inspect force-systemd-env-157000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:10:21.799682   16723 oci.go:664] temporary error: container force-systemd-env-157000 status is  but expect it to be exited
	I0430 21:10:21.799712   16723 oci.go:88] couldn't shut down force-systemd-env-157000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-157000": docker container inspect force-systemd-env-157000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	 
	I0430 21:10:21.799791   16723 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-157000
	I0430 21:10:21.847939   16723 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-157000
	W0430 21:10:21.895810   16723 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-157000 returned with exit code 1
	I0430 21:10:21.895921   16723 cli_runner.go:164] Run: docker network inspect force-systemd-env-157000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0430 21:10:21.944736   16723 cli_runner.go:164] Run: docker network rm force-systemd-env-157000
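
Once the shutdown could not be verified, cleanup proceeds unconditionally: force-remove the container with its volumes, then remove the per-profile network, exactly the two commands above. As a two-liner, with errors deliberately ignored to match the "probably ok" tone of the log:

    package main

    import "os/exec"

    // demolish force-removes the machine container (and anonymous volumes, -v)
    // and its dedicated bridge network; here the container rm is a no-op since
    // the container never existed, while the lingering network is removed.
    func demolish(name string) {
    	_ = exec.Command("docker", "rm", "-f", "-v", name).Run()
    	_ = exec.Command("docker", "network", "rm", name).Run()
    }

    func main() { demolish("force-systemd-env-157000") }
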
	I0430 21:10:22.048850   16723 fix.go:124] Sleeping 1 second for extra luck!
	I0430 21:10:23.051017   16723 start.go:125] createHost starting for "" (driver="docker")
	I0430 21:10:23.074139   16723 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0430 21:10:23.074297   16723 start.go:159] libmachine.API.Create for "force-systemd-env-157000" (driver="docker")
	I0430 21:10:23.074331   16723 client.go:168] LocalClient.Create starting
	I0430 21:10:23.074568   16723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/ca.pem
	I0430 21:10:23.074660   16723 main.go:141] libmachine: Decoding PEM data...
	I0430 21:10:23.074688   16723 main.go:141] libmachine: Parsing certificate...
	I0430 21:10:23.074766   16723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/cert.pem
	I0430 21:10:23.074839   16723 main.go:141] libmachine: Decoding PEM data...
	I0430 21:10:23.074861   16723 main.go:141] libmachine: Parsing certificate...
	I0430 21:10:23.096629   16723 cli_runner.go:164] Run: docker network inspect force-systemd-env-157000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0430 21:10:23.145830   16723 cli_runner.go:211] docker network inspect force-systemd-env-157000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0430 21:10:23.145922   16723 network_create.go:281] running [docker network inspect force-systemd-env-157000] to gather additional debugging logs...
	I0430 21:10:23.145938   16723 cli_runner.go:164] Run: docker network inspect force-systemd-env-157000
	W0430 21:10:23.192846   16723 cli_runner.go:211] docker network inspect force-systemd-env-157000 returned with exit code 1
	I0430 21:10:23.192874   16723 network_create.go:284] error running [docker network inspect force-systemd-env-157000]: docker network inspect force-systemd-env-157000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-157000 not found
	I0430 21:10:23.192887   16723 network_create.go:286] output of [docker network inspect force-systemd-env-157000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-157000 not found
	
	** /stderr **
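
The long --format template above is just a hand-rolled JSON projection of a network's name, driver, subnet, gateway, MTU, and attached container IPs; when it fails, a bare docker network inspect is rerun purely for debug output (the empty [] above). If only one field is needed, the template collapses to something much smaller, e.g.:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // networkSubnet pulls the first IPAM subnet of a Docker network, a minimal
    // cousin of the full template in the log.
    func networkSubnet(name string) (string, error) {
    	out, err := exec.Command("docker", "network", "inspect", name,
    		"--format", `{{(index .IPAM.Config 0).Subnet}}`).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	subnet, err := networkSubnet("bridge")
    	fmt.Println(subnet, err)
    }
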
	I0430 21:10:23.193021   16723 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0430 21:10:23.242642   16723 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:10:23.244002   16723 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:10:23.245333   16723 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:10:23.246774   16723 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:10:23.248339   16723 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:10:23.249913   16723 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 21:10:23.250458   16723 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022a60f0}
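
Note the pattern in the six skipped subnets: the scan starts at 192.168.49.0/24 and advances the third octet in steps of 9 until it finds a /24 that no existing network reserves. With six already taken, 192.168.103.0/24 wins; the gateway takes .1 and the node is pinned to the first client address, .2 (the "calculated static IP" just below). A sketch of the walk, with the reservation check left as a hypothetical predicate standing in for the Docker-network probe:

    package main

    import "fmt"

    // firstFreeSubnet walks /24 candidates from 192.168.49.0 upward in steps
    // of 9 in the third octet, returning the first one the reserved predicate
    // does not claim.
    func firstFreeSubnet(reserved func(cidr string) bool) (string, bool) {
    	for octet := 49; octet <= 247; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !reserved(cidr) {
    			return cidr, true
    		}
    	}
    	return "", false
    }

    func main() {
    	taken := map[string]bool{
    		"192.168.49.0/24": true, "192.168.58.0/24": true,
    		"192.168.67.0/24": true, "192.168.76.0/24": true,
    		"192.168.85.0/24": true, "192.168.94.0/24": true,
    	}
    	fmt.Println(firstFreeSubnet(func(c string) bool { return taken[c] }))
    	// prints: 192.168.103.0/24 true
    }
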
	I0430 21:10:23.250478   16723 network_create.go:124] attempt to create docker network force-systemd-env-157000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0430 21:10:23.250576   16723 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-157000 force-systemd-env-157000
	I0430 21:10:23.335648   16723 network_create.go:108] docker network force-systemd-env-157000 192.168.103.0/24 created
	I0430 21:10:23.335687   16723 kic.go:121] calculated static IP "192.168.103.2" for the "force-systemd-env-157000" container
	I0430 21:10:23.335806   16723 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0430 21:10:23.385669   16723 cli_runner.go:164] Run: docker volume create force-systemd-env-157000 --label name.minikube.sigs.k8s.io=force-systemd-env-157000 --label created_by.minikube.sigs.k8s.io=true
	I0430 21:10:23.434023   16723 oci.go:103] Successfully created a docker volume force-systemd-env-157000
	I0430 21:10:23.434152   16723 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-157000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-157000 --entrypoint /usr/bin/test -v force-systemd-env-157000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -d /var/lib
	I0430 21:10:23.682192   16723 oci.go:107] Successfully prepared a docker volume force-systemd-env-157000
	I0430 21:10:23.682229   16723 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 21:10:23.682252   16723 kic.go:194] Starting extracting preloaded images to volume ...
	I0430 21:10:23.682364   16723 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-157000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -I lz4 -xf /preloaded.tar -C /extractDir
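
The preload step above ships all Kubernetes v1.30.0 images as one lz4-compressed tarball and unpacks it into the machine's /var volume using a throwaway container, so no tar/lz4 tooling is needed on the host. Reduced to its moving parts, with the paths and image reference taken from the run above:

    package main

    import "os/exec"

    // extractPreload untars the lz4-compressed preload into a Docker volume
    // via a disposable container whose entrypoint is tar, mirroring the run
    // in the log.
    func extractPreload(tarball, volume, baseImage string) error {
    	return exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		baseImage,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
    	).Run()
    }

    func main() {
    	_ = extractPreload(
    		"preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4",
    		"force-systemd-env-157000",
    		"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769",
    	)
    }
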
	I0430 21:16:23.077978   16723 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0430 21:16:23.078085   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:16:23.129004   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	I0430 21:16:23.129119   16723 retry.go:31] will retry after 263.186604ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:16:23.392820   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:16:23.444809   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	I0430 21:16:23.444936   16723 retry.go:31] will retry after 518.953952ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:16:23.966258   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:16:24.017216   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	I0430 21:16:24.017326   16723 retry.go:31] will retry after 588.073924ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:16:24.607763   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:16:24.658566   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	W0430 21:16:24.658672   16723 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	
	W0430 21:16:24.658692   16723 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:16:24.658759   16723 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0430 21:16:24.658824   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:16:24.707011   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	I0430 21:16:24.707108   16723 retry.go:31] will retry after 249.047451ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:16:24.958475   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:16:25.009912   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	I0430 21:16:25.010008   16723 retry.go:31] will retry after 385.529455ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:16:25.396206   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:16:25.448920   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	I0430 21:16:25.449015   16723 retry.go:31] will retry after 302.776967ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:16:25.752723   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:16:25.804281   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	I0430 21:16:25.804392   16723 retry.go:31] will retry after 617.784012ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:16:26.423579   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:16:26.475134   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	W0430 21:16:26.475235   16723 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	
	W0430 21:16:26.475249   16723 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:16:26.475257   16723 start.go:128] duration metric: took 6m3.422742233s to createHost
	I0430 21:16:26.475337   16723 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0430 21:16:26.475410   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:16:26.523538   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	I0430 21:16:26.523629   16723 retry.go:31] will retry after 298.394292ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:16:26.824439   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:16:26.875530   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	I0430 21:16:26.875634   16723 retry.go:31] will retry after 478.232552ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:16:27.355278   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:16:27.407465   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	I0430 21:16:27.407563   16723 retry.go:31] will retry after 292.547684ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:16:27.702532   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:16:27.754372   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	I0430 21:16:27.754461   16723 retry.go:31] will retry after 508.667774ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:16:28.265531   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:16:28.316294   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	W0430 21:16:28.316393   16723 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	
	W0430 21:16:28.316405   16723 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:16:28.316462   16723 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0430 21:16:28.316519   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:16:28.366235   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	I0430 21:16:28.366335   16723 retry.go:31] will retry after 341.945355ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:16:28.710629   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:16:28.760912   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	I0430 21:16:28.761005   16723 retry.go:31] will retry after 286.545382ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:16:29.048876   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:16:29.099573   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	I0430 21:16:29.099673   16723 retry.go:31] will retry after 446.926883ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:16:29.548997   16723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000
	W0430 21:16:29.599921   16723 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000 returned with exit code 1
	W0430 21:16:29.600043   16723 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	
	W0430 21:16:29.600060   16723 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-157000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-157000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	I0430 21:16:29.600071   16723 fix.go:56] duration metric: took 6m21.022926012s for fixHost
	I0430 21:16:29.600079   16723 start.go:83] releasing machines lock for "force-systemd-env-157000", held for 6m21.022980172s
	W0430 21:16:29.600154   16723 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-157000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-157000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0430 21:16:29.642806   16723 out.go:177] 
	W0430 21:16:29.663732   16723 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0430 21:16:29.663775   16723 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0430 21:16:29.663807   16723 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0430 21:16:29.705700   16723 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-157000 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-157000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-157000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (200.604813ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-env-157000 host status: state: unknown state "force-systemd-env-157000": docker container inspect force-systemd-env-157000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000
	

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-157000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
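
What the test was ultimately after (docker_test.go:110) is a one-field probe: with systemd forced via the environment, the guest's Docker must report systemd as its cgroup driver, read through a Go template over docker info. The same probe against a local daemon, rather than over "minikube ssh" as the test does:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// docker info exposes the active cgroup driver ("systemd" or
    	// "cgroupfs") via a Go template.
    	out, err := exec.Command("docker", "info",
    		"--format", "{{.CgroupDriver}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(strings.TrimSpace(string(out)))
    }
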
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-04-30 21:16:29.982808 -0700 PDT m=+6358.271055347
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-157000
helpers_test.go:235: (dbg) docker inspect force-systemd-env-157000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "force-systemd-env-157000",
	        "Id": "0785f48a25c7de5e5381a5f2d51106e357ee6f2ecb7bbe955bbfae230f283c64",
	        "Created": "2024-05-01T04:10:23.29578919Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-env-157000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-157000 -n force-systemd-env-157000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-157000 -n force-systemd-env-157000: exit status 7 (149.45601ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0430 21:16:30.145002   17610 status.go:249] status error: host: state: unknown state "force-systemd-env-157000": docker container inspect force-systemd-env-157000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-157000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-157000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-157000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-157000
--- FAIL: TestForceSystemdEnv (750.85s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (893.02s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-677000 ssh -- ls /minikube-host
E0430 20:01:06.585993    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 20:01:41.241657    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 20:03:04.290843    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 20:06:06.611201    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 20:06:41.246118    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 20:11:06.607969    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 20:11:41.242751    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 20:14:09.655449    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-1-677000 ssh -- ls /minikube-host: signal: killed (14m52.580920126s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-1-677000 ssh -- ls /minikube-host" : signal: killed
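
The check itself is minimal: the host directory is bind-mounted into the guest (per the inspect output below, /host_mnt/Users -> /minikube-host), and a single ls over SSH is taken as proof the mount is live; here that ssh never returned and was killed with the test after 14m52s. A local stand-in with an explicit deadline, which the ssh path evidently lacked; the helper name is ours:

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // verifyMount lists the mounted path with a hard deadline, the shape of
    // the check in mount_start_test.go:114 but bounded so a wedged mount
    // fails fast instead of hanging until the harness kills it.
    func verifyMount(path string, timeout time.Duration) error {
    	ctx, cancel := context.WithTimeout(context.Background(), timeout)
    	defer cancel()
    	return exec.CommandContext(ctx, "ls", path).Run()
    }

    func main() {
    	fmt.Println(verifyMount("/minikube-host", 30*time.Second))
    }
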
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountFirst]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-1-677000
helpers_test.go:235: (dbg) docker inspect mount-start-1-677000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "04b98db6c1679da1bb0d06afa57c622cd927b6c548a5c161546c15cce8309850",
	        "Created": "2024-05-01T03:00:04.150611987Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 120868,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-01T03:00:04.316564831Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5a9f4571bee0c9e8a2bf2dbac4acb74ac80800f0d900766b498003a9a0b4faa9",
	        "ResolvConfPath": "/var/lib/docker/containers/04b98db6c1679da1bb0d06afa57c622cd927b6c548a5c161546c15cce8309850/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/04b98db6c1679da1bb0d06afa57c622cd927b6c548a5c161546c15cce8309850/hostname",
	        "HostsPath": "/var/lib/docker/containers/04b98db6c1679da1bb0d06afa57c622cd927b6c548a5c161546c15cce8309850/hosts",
	        "LogPath": "/var/lib/docker/containers/04b98db6c1679da1bb0d06afa57c622cd927b6c548a5c161546c15cce8309850/04b98db6c1679da1bb0d06afa57c622cd927b6c548a5c161546c15cce8309850-json.log",
	        "Name": "/mount-start-1-677000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "mount-start-1-677000:/var",
	                "/host_mnt/Users:/minikube-host"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-1-677000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4d8424ca991618ccc38089d69e997ba3f51016578f11a6be0b749cd09b2e1a90-init/diff:/var/lib/docker/overlay2/12491b0e936d2136eb7715d1a87c22c3e0aa24acbbfb72ff25108105fedcc08b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4d8424ca991618ccc38089d69e997ba3f51016578f11a6be0b749cd09b2e1a90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4d8424ca991618ccc38089d69e997ba3f51016578f11a6be0b749cd09b2e1a90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4d8424ca991618ccc38089d69e997ba3f51016578f11a6be0b749cd09b2e1a90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "mount-start-1-677000",
	                "Source": "/var/lib/docker/volumes/mount-start-1-677000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/host_mnt/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-1-677000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-1-677000",
	                "name.minikube.sigs.k8s.io": "mount-start-1-677000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f6106d966301d84dada8fd710f5bc0666128da533db264416596c70243e59fb3",
	            "SandboxKey": "/var/run/docker/netns/f6106d966301",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54556"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54552"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54553"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54554"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54555"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-1-677000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "7e9a4c4dd7b1df9e865563c90d072562bb8fcc9adf1aadccac92fc5118227e37",
	                    "EndpointID": "ecf54e1664365746e9e4c8bc74734bcdac65a76de769af6ce86d41016aaffe92",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "mount-start-1-677000",
	                        "04b98db6c167"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
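In the inspect output above, every PortBindings entry asks for HostPort "0", that is, an ephemeral port; the ports Docker actually assigned appear under NetworkSettings.Ports (54552-54556). A single field can be read with a Go template instead of parsing the whole JSON; a minimal sketch against this run's profile name:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' mount-start-1-677000
	# prints 54556 for the state captured above; the same template minikube itself runs later in this report to locate the SSH port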
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-677000 -n mount-start-1-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-677000 -n mount-start-1-677000: exit status 6 (378.265144ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0430 20:15:02.804764   14676 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-1-677000" does not appear in /Users/jenkins/minikube-integration/18779-7316/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-1-677000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountFirst (893.02s)
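The status probe exits 6 only because the profile has no kubeconfig entry ("does not appear in .../kubeconfig"), not because the container stopped; its host state still reads Running. Following the warning printed in the output, the context could be repaired by hand; a sketch, assuming the profile still exists:

	out/minikube-darwin-amd64 update-context -p mount-start-1-677000
	kubectl config get-contexts    # verify an entry for the profile is now present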

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (757.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-613000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0430 20:16:41.325809    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 20:19:44.369054    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 20:21:06.687439    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 20:21:41.325227    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 20:26:06.686738    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 20:26:41.324191    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-613000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m36.905360167s)

                                                
                                                
-- stdout --
	* [multinode-613000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18779
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "multinode-613000" primary control-plane node in "multinode-613000" cluster
	* Pulling base image v0.0.43-1714386659-18769 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-613000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0430 20:16:11.827087   14778 out.go:291] Setting OutFile to fd 1 ...
	I0430 20:16:11.827343   14778 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:16:11.827349   14778 out.go:304] Setting ErrFile to fd 2...
	I0430 20:16:11.827353   14778 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:16:11.827515   14778 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 20:16:11.828985   14778 out.go:298] Setting JSON to false
	I0430 20:16:11.851987   14778 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4542,"bootTime":1714528829,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0430 20:16:11.852091   14778 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0430 20:16:11.874366   14778 out.go:177] * [multinode-613000] minikube v1.33.0 on Darwin 14.4.1
	I0430 20:16:11.916153   14778 out.go:177]   - MINIKUBE_LOCATION=18779
	I0430 20:16:11.916165   14778 notify.go:220] Checking for updates...
	I0430 20:16:11.957976   14778 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	I0430 20:16:11.979201   14778 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0430 20:16:12.000124   14778 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0430 20:16:12.021082   14778 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	I0430 20:16:12.042315   14778 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0430 20:16:12.064716   14778 driver.go:392] Setting default libvirt URI to qemu:///system
	I0430 20:16:12.119455   14778 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0430 20:16:12.119620   14778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0430 20:16:12.226828   14778 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:false NGoroutines:105 SystemTime:2024-05-01 03:16:12.216702914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211080192 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0430 20:16:12.269686   14778 out.go:177] * Using the docker driver based on user configuration
	I0430 20:16:12.290612   14778 start.go:297] selected driver: docker
	I0430 20:16:12.290641   14778 start.go:901] validating driver "docker" against <nil>
	I0430 20:16:12.290669   14778 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0430 20:16:12.294564   14778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0430 20:16:12.399791   14778 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:86 OomKillDisable:false NGoroutines:105 SystemTime:2024-05-01 03:16:12.38973468 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211080192 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0430 20:16:12.399967   14778 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0430 20:16:12.400148   14778 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0430 20:16:12.421791   14778 out.go:177] * Using Docker Desktop driver with root privileges
	I0430 20:16:12.443808   14778 cni.go:84] Creating CNI manager for ""
	I0430 20:16:12.443839   14778 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0430 20:16:12.443851   14778 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0430 20:16:12.443944   14778 start.go:340] cluster config:
	{Name:multinode-613000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0430 20:16:12.465775   14778 out.go:177] * Starting "multinode-613000" primary control-plane node in "multinode-613000" cluster
	I0430 20:16:12.486509   14778 cache.go:121] Beginning downloading kic base image for docker with docker
	I0430 20:16:12.507692   14778 out.go:177] * Pulling base image v0.0.43-1714386659-18769 ...
	I0430 20:16:12.549558   14778 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 20:16:12.549630   14778 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0430 20:16:12.549614   14778 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e in local docker daemon
	I0430 20:16:12.549649   14778 cache.go:56] Caching tarball of preloaded images
	I0430 20:16:12.549888   14778 preload.go:173] Found /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0430 20:16:12.549911   14778 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0430 20:16:12.551471   14778 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/multinode-613000/config.json ...
	I0430 20:16:12.551627   14778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/multinode-613000/config.json: {Name:mk4899cb980ad5d8ea66f3210797043e00fea78e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0430 20:16:12.601454   14778 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e in local docker daemon, skipping pull
	I0430 20:16:12.601473   14778 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e exists in daemon, skipping load
	I0430 20:16:12.601494   14778 cache.go:194] Successfully downloaded all kic artifacts
	I0430 20:16:12.601547   14778 start.go:360] acquireMachinesLock for multinode-613000: {Name:mk4b1997cc63c071a5d4bd65917cfb80e5f3ad67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0430 20:16:12.601956   14778 start.go:364] duration metric: took 395.545µs to acquireMachinesLock for "multinode-613000"
	I0430 20:16:12.601990   14778 start.go:93] Provisioning new machine with config: &{Name:multinode-613000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0430 20:16:12.602063   14778 start.go:125] createHost starting for "" (driver="docker")
	I0430 20:16:12.644609   14778 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0430 20:16:12.644971   14778 start.go:159] libmachine.API.Create for "multinode-613000" (driver="docker")
	I0430 20:16:12.645021   14778 client.go:168] LocalClient.Create starting
	I0430 20:16:12.645247   14778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/ca.pem
	I0430 20:16:12.645349   14778 main.go:141] libmachine: Decoding PEM data...
	I0430 20:16:12.645383   14778 main.go:141] libmachine: Parsing certificate...
	I0430 20:16:12.645486   14778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/cert.pem
	I0430 20:16:12.645566   14778 main.go:141] libmachine: Decoding PEM data...
	I0430 20:16:12.645584   14778 main.go:141] libmachine: Parsing certificate...
	I0430 20:16:12.646507   14778 cli_runner.go:164] Run: docker network inspect multinode-613000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0430 20:16:12.695174   14778 cli_runner.go:211] docker network inspect multinode-613000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0430 20:16:12.695271   14778 network_create.go:281] running [docker network inspect multinode-613000] to gather additional debugging logs...
	I0430 20:16:12.695285   14778 cli_runner.go:164] Run: docker network inspect multinode-613000
	W0430 20:16:12.742640   14778 cli_runner.go:211] docker network inspect multinode-613000 returned with exit code 1
	I0430 20:16:12.742666   14778 network_create.go:284] error running [docker network inspect multinode-613000]: docker network inspect multinode-613000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-613000 not found
	I0430 20:16:12.742677   14778 network_create.go:286] output of [docker network inspect multinode-613000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-613000 not found
	
	** /stderr **
	I0430 20:16:12.742815   14778 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0430 20:16:12.792574   14778 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 20:16:12.793959   14778 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 20:16:12.794319   14778 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022a81d0}
	I0430 20:16:12.794338   14778 network_create.go:124] attempt to create docker network multinode-613000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0430 20:16:12.794403   14778 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-613000 multinode-613000
	W0430 20:16:12.842514   14778 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-613000 multinode-613000 returned with exit code 1
	W0430 20:16:12.842546   14778 network_create.go:149] failed to create docker network multinode-613000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-613000 multinode-613000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0430 20:16:12.842567   14778 network_create.go:116] failed to create docker network multinode-613000 192.168.67.0/24, will retry: subnet is taken
	I0430 20:16:12.844174   14778 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 20:16:12.844548   14778 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002251e00}
	I0430 20:16:12.844560   14778 network_create.go:124] attempt to create docker network multinode-613000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0430 20:16:12.844627   14778 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-613000 multinode-613000
	I0430 20:16:12.929174   14778 network_create.go:108] docker network multinode-613000 192.168.76.0/24 created
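The first create is rejected because 192.168.67.0/24 is already claimed, most likely by the mount-start-1-677000 network inspected earlier in this report, which sits on that subnet, so minikube steps to the next free private /24. The conflict can be confirmed with the same inspect template the log applies to the bridge network; a sketch:

	docker network inspect mount-start-1-677000 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
	# 192.168.67.0/24, overlapping the first candidate and forcing the retry on 192.168.76.0/24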
	I0430 20:16:12.929224   14778 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-613000" container
	I0430 20:16:12.929341   14778 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0430 20:16:12.977882   14778 cli_runner.go:164] Run: docker volume create multinode-613000 --label name.minikube.sigs.k8s.io=multinode-613000 --label created_by.minikube.sigs.k8s.io=true
	I0430 20:16:13.027869   14778 oci.go:103] Successfully created a docker volume multinode-613000
	I0430 20:16:13.027988   14778 cli_runner.go:164] Run: docker run --rm --name multinode-613000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-613000 --entrypoint /usr/bin/test -v multinode-613000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -d /var/lib
	I0430 20:16:13.343106   14778 oci.go:107] Successfully prepared a docker volume multinode-613000
	I0430 20:16:13.343141   14778 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 20:16:13.343153   14778 kic.go:194] Starting extracting preloaded images to volume ...
	I0430 20:16:13.343249   14778 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-613000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -I lz4 -xf /preloaded.tar -C /extractDir
	I0430 20:22:12.731012   14778 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0430 20:22:12.731164   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:22:12.784624   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:22:12.784751   14778 retry.go:31] will retry after 198.798652ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:12.984337   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:22:13.036207   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:22:13.036322   14778 retry.go:31] will retry after 397.424093ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:13.436150   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:22:13.487958   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:22:13.488063   14778 retry.go:31] will retry after 473.018934ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:13.963500   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:22:14.015980   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	W0430 20:22:14.016085   14778 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	
	W0430 20:22:14.016106   14778 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:14.016161   14778 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0430 20:22:14.016213   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:22:14.064066   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:22:14.064157   14778 retry.go:31] will retry after 170.085002ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:14.235314   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:22:14.286328   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:22:14.286420   14778 retry.go:31] will retry after 297.553059ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:14.585519   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:22:14.637164   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:22:14.637255   14778 retry.go:31] will retry after 558.31409ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:15.197956   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:22:15.247239   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:22:15.247331   14778 retry.go:31] will retry after 539.561693ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:15.789272   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:22:15.840181   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	W0430 20:22:15.840280   14778 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	
	W0430 20:22:15.840295   14778 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:15.840313   14778 start.go:128] duration metric: took 6m3.154638379s to createHost
	I0430 20:22:15.840319   14778 start.go:83] releasing machines lock for "multinode-613000", held for 6m3.154756618s
	W0430 20:22:15.840334   14778 start.go:713] error starting host: creating host: create host timed out in 360.000000 seconds
	I0430 20:22:15.840776   14778 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:22:15.888989   14778 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:22:15.889038   14778 delete.go:82] Unable to get host status for multinode-613000, assuming it has already been deleted: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	W0430 20:22:15.889121   14778 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0430 20:22:15.889133   14778 start.go:728] Will try again in 5 seconds ...
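Note the six-minute gap between the preload extraction started at 20:16:13 and the next log line at 20:22:12: the extraction into the multinode-613000 volume never logs completion before the 360-second createHost deadline, so the node container is never started and every subsequent inspect returns "No such container". While reproducing, the absence is easy to confirm; a sketch:

	docker ps -a --filter name=multinode-613000 --format '{{.Names}}\t{{.Status}}'
	# empty output matches the inspect failures above: the container was never created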
	I0430 20:22:20.890116   14778 start.go:360] acquireMachinesLock for multinode-613000: {Name:mk4b1997cc63c071a5d4bd65917cfb80e5f3ad67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0430 20:22:20.891271   14778 start.go:364] duration metric: took 347.242µs to acquireMachinesLock for "multinode-613000"
	I0430 20:22:20.891337   14778 start.go:96] Skipping create...Using existing machine configuration
	I0430 20:22:20.891354   14778 fix.go:54] fixHost starting: 
	I0430 20:22:20.891820   14778 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:22:20.943574   14778 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:22:20.943623   14778 fix.go:112] recreateIfNeeded on multinode-613000: state= err=unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:20.943641   14778 fix.go:117] machineExists: false. err=machine does not exist
	I0430 20:22:20.965549   14778 out.go:177] * docker "multinode-613000" container is missing, will recreate.
	I0430 20:22:21.006986   14778 delete.go:124] DEMOLISHING multinode-613000 ...
	I0430 20:22:21.007188   14778 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:22:21.056266   14778 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	W0430 20:22:21.056321   14778 stop.go:83] unable to get state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:21.056342   14778 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:21.056725   14778 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:22:21.103925   14778 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:22:21.103974   14778 delete.go:82] Unable to get host status for multinode-613000, assuming it has already been deleted: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:21.104062   14778 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-613000
	W0430 20:22:21.150779   14778 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-613000 returned with exit code 1
	I0430 20:22:21.150830   14778 kic.go:371] could not find the container multinode-613000 to remove it. will try anyways
	I0430 20:22:21.150900   14778 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:22:21.197653   14778 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	W0430 20:22:21.197694   14778 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:21.197778   14778 cli_runner.go:164] Run: docker exec --privileged -t multinode-613000 /bin/bash -c "sudo init 0"
	W0430 20:22:21.244859   14778 cli_runner.go:211] docker exec --privileged -t multinode-613000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0430 20:22:21.244888   14778 oci.go:650] error shutdown multinode-613000: docker exec --privileged -t multinode-613000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:22.247209   14778 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:22:22.297546   14778 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:22:22.297589   14778 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:22.297598   14778 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:22:22.297619   14778 retry.go:31] will retry after 748.639433ms: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:23.047535   14778 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:22:23.101904   14778 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:22:23.101946   14778 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:23.101958   14778 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:22:23.101980   14778 retry.go:31] will retry after 684.464839ms: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:23.787064   14778 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:22:23.838367   14778 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:22:23.838431   14778 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:23.838443   14778 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:22:23.838467   14778 retry.go:31] will retry after 1.240000666s: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:25.080809   14778 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:22:25.131553   14778 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:22:25.131599   14778 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:25.131610   14778 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:22:25.131636   14778 retry.go:31] will retry after 1.367598645s: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:26.501589   14778 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:22:26.552315   14778 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:22:26.552360   14778 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:26.552374   14778 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:22:26.552398   14778 retry.go:31] will retry after 2.253057163s: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:28.807380   14778 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:22:28.858580   14778 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:22:28.858624   14778 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:28.858636   14778 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:22:28.858659   14778 retry.go:31] will retry after 3.626874302s: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:32.487951   14778 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:22:32.540680   14778 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:22:32.540733   14778 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:32.540746   14778 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:22:32.540765   14778 retry.go:31] will retry after 8.534822502s: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:41.077996   14778 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:22:41.129139   14778 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:22:41.129182   14778 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:22:41.129191   14778 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:22:41.129224   14778 oci.go:88] couldn't shut down multinode-613000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	 
	I0430 20:22:41.129302   14778 cli_runner.go:164] Run: docker rm -f -v multinode-613000
	I0430 20:22:41.176736   14778 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-613000
	W0430 20:22:41.224804   14778 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-613000 returned with exit code 1
	I0430 20:22:41.224921   14778 cli_runner.go:164] Run: docker network inspect multinode-613000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0430 20:22:41.273022   14778 cli_runner.go:164] Run: docker network rm multinode-613000
	I0430 20:22:41.380632   14778 fix.go:124] Sleeping 1 second for extra luck!
	I0430 20:22:42.382833   14778 start.go:125] createHost starting for "" (driver="docker")
	I0430 20:22:42.404815   14778 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0430 20:22:42.404987   14778 start.go:159] libmachine.API.Create for "multinode-613000" (driver="docker")
	I0430 20:22:42.405018   14778 client.go:168] LocalClient.Create starting
	I0430 20:22:42.405243   14778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/ca.pem
	I0430 20:22:42.405339   14778 main.go:141] libmachine: Decoding PEM data...
	I0430 20:22:42.405368   14778 main.go:141] libmachine: Parsing certificate...
	I0430 20:22:42.405454   14778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/cert.pem
	I0430 20:22:42.405528   14778 main.go:141] libmachine: Decoding PEM data...
	I0430 20:22:42.405548   14778 main.go:141] libmachine: Parsing certificate...
	I0430 20:22:42.406835   14778 cli_runner.go:164] Run: docker network inspect multinode-613000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0430 20:22:42.456105   14778 cli_runner.go:211] docker network inspect multinode-613000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0430 20:22:42.456204   14778 network_create.go:281] running [docker network inspect multinode-613000] to gather additional debugging logs...
	I0430 20:22:42.456224   14778 cli_runner.go:164] Run: docker network inspect multinode-613000
	W0430 20:22:42.506430   14778 cli_runner.go:211] docker network inspect multinode-613000 returned with exit code 1
	I0430 20:22:42.506459   14778 network_create.go:284] error running [docker network inspect multinode-613000]: docker network inspect multinode-613000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-613000 not found
	I0430 20:22:42.506472   14778 network_create.go:286] output of [docker network inspect multinode-613000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-613000 not found
	
	** /stderr **
	I0430 20:22:42.506593   14778 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0430 20:22:42.557928   14778 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 20:22:42.559543   14778 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 20:22:42.561196   14778 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 20:22:42.562839   14778 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 20:22:42.563360   14778 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000880c20}
	I0430 20:22:42.563388   14778 network_create.go:124] attempt to create docker network multinode-613000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0430 20:22:42.563474   14778 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-613000 multinode-613000
	I0430 20:22:42.647618   14778 network_create.go:108] docker network multinode-613000 192.168.85.0/24 created
	I0430 20:22:42.647651   14778 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-613000" container
	I0430 20:22:42.647750   14778 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0430 20:22:42.696636   14778 cli_runner.go:164] Run: docker volume create multinode-613000 --label name.minikube.sigs.k8s.io=multinode-613000 --label created_by.minikube.sigs.k8s.io=true
	I0430 20:22:42.744802   14778 oci.go:103] Successfully created a docker volume multinode-613000
	I0430 20:22:42.744928   14778 cli_runner.go:164] Run: docker run --rm --name multinode-613000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-613000 --entrypoint /usr/bin/test -v multinode-613000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -d /var/lib
	I0430 20:22:42.993436   14778 oci.go:107] Successfully prepared a docker volume multinode-613000
	I0430 20:22:42.993469   14778 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 20:22:42.993482   14778 kic.go:194] Starting extracting preloaded images to volume ...
	I0430 20:22:42.993594   14778 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-613000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -I lz4 -xf /preloaded.tar -C /extractDir
	I0430 20:28:42.405157   14778 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0430 20:28:42.405284   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:28:42.457304   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:28:42.457419   14778 retry.go:31] will retry after 196.420126ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:28:42.654284   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:28:42.707123   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:28:42.707235   14778 retry.go:31] will retry after 243.499009ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:28:42.953156   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:28:43.007396   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:28:43.007508   14778 retry.go:31] will retry after 539.420324ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:28:43.547801   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:28:43.601259   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	W0430 20:28:43.601367   14778 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	
	W0430 20:28:43.601383   14778 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:28:43.601433   14778 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0430 20:28:43.601485   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:28:43.649320   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:28:43.649417   14778 retry.go:31] will retry after 261.149568ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:28:43.912946   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:28:43.965507   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:28:43.965604   14778 retry.go:31] will retry after 211.575997ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:28:44.179558   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:28:44.231338   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:28:44.231436   14778 retry.go:31] will retry after 809.951886ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:28:45.043735   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:28:45.096756   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	W0430 20:28:45.096859   14778 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	
	W0430 20:28:45.096876   14778 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:28:45.096889   14778 start.go:128] duration metric: took 6m2.715228461s to createHost
	I0430 20:28:45.096950   14778 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0430 20:28:45.097005   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:28:45.146865   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:28:45.146957   14778 retry.go:31] will retry after 247.65903ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:28:45.397070   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:28:45.446866   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:28:45.446965   14778 retry.go:31] will retry after 270.549388ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:28:45.719914   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:28:45.773127   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:28:45.773217   14778 retry.go:31] will retry after 291.073567ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:28:46.066522   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:28:46.118615   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:28:46.118712   14778 retry.go:31] will retry after 443.882476ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:28:46.565045   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:28:46.616857   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	W0430 20:28:46.616953   14778 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	
	W0430 20:28:46.616973   14778 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:28:46.617022   14778 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0430 20:28:46.617082   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:28:46.665005   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:28:46.665101   14778 retry.go:31] will retry after 250.823475ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:28:46.916477   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:28:46.983516   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:28:46.983610   14778 retry.go:31] will retry after 447.204449ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:28:47.431951   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:28:47.485376   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:28:47.485470   14778 retry.go:31] will retry after 286.224543ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:28:47.773811   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:28:47.826395   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:28:47.826497   14778 retry.go:31] will retry after 705.011009ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:28:48.532767   14778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:28:48.585392   14778 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	W0430 20:28:48.585500   14778 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	
	W0430 20:28:48.585518   14778 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:28:48.585529   14778 fix.go:56] duration metric: took 6m27.69549805s for fixHost
	I0430 20:28:48.585536   14778 start.go:83] releasing machines lock for "multinode-613000", held for 6m27.695542011s
	W0430 20:28:48.585611   14778 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-613000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-613000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0430 20:28:48.628136   14778 out.go:177] 
	W0430 20:28:48.649059   14778 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0430 20:28:48.649110   14778 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0430 20:28:48.649132   14778 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0430 20:28:48.670085   14778 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-613000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-613000
helpers_test.go:235: (dbg) docker inspect multinode-613000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-613000",
	        "Id": "e54ba4e4529a74a1b055f3c55669d8eafe88c619a594c67d4c3396084012bcdb",
	        "Created": "2024-05-01T03:22:42.608370144Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-613000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (112.038369ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0430 20:28:48.931182   15051 status.go:249] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (757.08s)
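
The oci.go and retry.go lines above show minikube polling `docker container inspect --format={{.State.Status}}` with a growing delay until the container reports the expected state (and later the same pattern against the 22/tcp host-port mapping). A minimal Go sketch of that poll-with-backoff pattern, with hypothetical helper names rather than minikube's actual retry package:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// containerStatus shells out to the docker CLI, as cli_runner.go does.
	func containerStatus(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").CombinedOutput()
		if err != nil {
			// Mirrors the log: inspect exits non-zero with
			// "No such container" once the container is gone.
			return "", fmt.Errorf("unknown state %q: %v: %s", name, err, out)
		}
		return strings.TrimSpace(string(out)), nil
	}

	// waitExited retries with exponential backoff until the container is
	// "exited" or the deadline passes, like the "will retry after" lines.
	func waitExited(name string, deadline time.Duration) error {
		delay := 500 * time.Millisecond
		for start := time.Now(); time.Since(start) < deadline; {
			status, err := containerStatus(name)
			if err == nil && status == "exited" {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2
		}
		return fmt.Errorf("couldn't verify container %q is exited", name)
	}

	func main() {
		if err := waitExited("multinode-613000", 20*time.Second); err != nil {
			fmt.Println(err)
		}
	}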

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (77.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-613000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-613000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (111.697006ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-613000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-613000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-613000 -- rollout status deployment/busybox: exit status 1 (107.419413ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-613000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.603918ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-613000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (113.823221ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-613000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (112.668249ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-613000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (113.431387ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-613000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (112.687913ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-613000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (112.966695ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-613000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.547002ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-613000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (113.231666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-613000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (111.192649ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-613000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.732116ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-613000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (107.18809ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-613000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-613000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-613000 -- exec  -- nslookup kubernetes.io: exit status 1 (106.964463ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-613000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-613000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-613000 -- exec  -- nslookup kubernetes.default: exit status 1 (108.075999ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-613000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-613000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-613000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (107.213753ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-613000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-613000
helpers_test.go:235: (dbg) docker inspect multinode-613000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-613000",
	        "Id": "e54ba4e4529a74a1b055f3c55669d8eafe88c619a594c67d4c3396084012bcdb",
	        "Created": "2024-05-01T03:22:42.608370144Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-613000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (112.032076ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0430 20:30:06.079835   15129 status.go:249] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (77.15s)
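
Each failed step above shells out to kubectl and retries when the query errors out. A minimal sketch of that pod-IP poll, assuming a plain `kubectl` on PATH and using `--context` directly instead of the `minikube kubectl -p` wrapper the test actually drives (helper names are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podIPs runs the same JSONPath query as multinode_test.go:505.
	func podIPs(context string) ([]string, error) {
		out, err := exec.Command("kubectl", "--context", context, "get", "pods",
			"-o", "jsonpath={.items[*].status.podIP}").Output()
		if err != nil {
			// With no cluster, kubectl fails as in the log:
			// `no server found for cluster "multinode-613000"`.
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for attempt := 1; attempt <= 10; attempt++ {
			ips, err := podIPs("multinode-613000")
			if err == nil && len(ips) == 2 { // one busybox pod per node
				fmt.Println("pod IPs:", ips)
				return
			}
			fmt.Printf("attempt %d (may be temporary): %v\n", attempt, err)
			time.Sleep(2 * time.Second)
		}
		fmt.Println("failed to resolve pod IPs")
	}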

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-613000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (107.393123ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-613000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-613000
helpers_test.go:235: (dbg) docker inspect multinode-613000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-613000",
	        "Id": "e54ba4e4529a74a1b055f3c55669d8eafe88c619a594c67d4c3396084012bcdb",
	        "Created": "2024-05-01T03:22:42.608370144Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-613000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (113.009501ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0430 20:30:06.352525   15138 status.go:249] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.27s)

                                                
                                    
TestMultiNode/serial/AddNode (0.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-613000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-613000 -v 3 --alsologtostderr: exit status 80 (200.959416ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0430 20:30:06.415874   15142 out.go:291] Setting OutFile to fd 1 ...
	I0430 20:30:06.416151   15142 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:06.416156   15142 out.go:304] Setting ErrFile to fd 2...
	I0430 20:30:06.416160   15142 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:06.416329   15142 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 20:30:06.416659   15142 mustload.go:65] Loading cluster: multinode-613000
	I0430 20:30:06.416953   15142 config.go:182] Loaded profile config "multinode-613000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 20:30:06.417329   15142 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:30:06.465709   15142 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:30:06.487339   15142 out.go:177] 
	W0430 20:30:06.509274   15142 out.go:239] X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-613000 host status: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-613000 host status: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	
	I0430 20:30:06.531238   15142 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-613000 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-613000
helpers_test.go:235: (dbg) docker inspect multinode-613000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-613000",
	        "Id": "e54ba4e4529a74a1b055f3c55669d8eafe88c619a594c67d4c3396084012bcdb",
	        "Created": "2024-05-01T03:22:42.608370144Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-613000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (113.035072ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0430 20:30:06.718437   15148 status.go:249] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.37s)
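
The `docker inspect` output in each post-mortem above is the bridge network that network_create.go created before host creation timed out. A sketch reproducing that step with the same flags as the `docker network create` call in the log (the subnet is the free one minikube picked after skipping the reserved 192.168.49/58/67/76 ranges; error handling trimmed):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		name, subnet, gateway := "multinode-613000", "192.168.85.0/24", "192.168.85.1"
		// Bridge driver, static subnet/gateway, IP-masquerade and
		// inter-container connectivity options, MTU 65535, minikube labels.
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet,
			"--gateway="+gateway,
			"-o", "--ip-masq",
			"-o", "--icc",
			"-o", "com.docker.network.driver.mtu=65535",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io="+name,
			name).CombinedOutput()
		fmt.Printf("%serr: %v\n", out, err)
	}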

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-613000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-613000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (36.916604ms)
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-613000
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-613000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-613000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-613000
helpers_test.go:235: (dbg) docker inspect multinode-613000:
-- stdout --
	[
	    {
	        "Name": "multinode-613000",
	        "Id": "e54ba4e4529a74a1b055f3c55669d8eafe88c619a594c67d4c3396084012bcdb",
	        "Created": "2024-05-01T03:22:42.608370144Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-613000"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (112.872327ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0430 20:30:06.920830   15155 status.go:249] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.20s)
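
The "unexpected end of JSON input" above is the error encoding/json returns for empty input: kubectl exited non-zero because the context was gone, so the test had no label JSON to decode. A minimal sketch reproducing the decode error (variable names here are illustrative, not the test's own):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// kubectl printed nothing to stdout, so the decode sees empty input.
		var labels []map[string]string
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}
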

TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:166: expected profile "multinode-613000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-1-677000\",\"Status\":\"\",\"Config\":null,\"Active\":false,\"ActiveKubeContext\":false}],\"valid\":[{\"Name\":\"multinode-613000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-613000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"multinode-613000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-613000
helpers_test.go:235: (dbg) docker inspect multinode-613000:
-- stdout --
	[
	    {
	        "Name": "multinode-613000",
	        "Id": "e54ba4e4529a74a1b055f3c55669d8eafe88c619a594c67d4c3396084012bcdb",
	        "Created": "2024-05-01T03:22:42.608370144Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-613000"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (112.513424ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0430 20:30:07.270923   15167 status.go:249] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.35s)
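
The assertion above counts the entries of .valid[0].Config.Nodes in the profile JSON; because AddNode failed, the stored profile still lists only the original control-plane node, hence "include 3 nodes but have 1 nodes". A sketch of that count, with the struct trimmed to the fields the assertion appears to inspect (field names taken from the JSON above, everything else omitted):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList models only the fragment of `minikube profile list
	// --output json` needed to count nodes.
	type profileList struct {
		Valid []struct {
			Name   string
			Config struct {
				Nodes []struct{ Name string }
			}
		} `json:"valid"`
	}

	func main() {
		data := []byte(`{"valid":[{"Name":"multinode-613000","Config":{"Nodes":[{"Name":""}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(data, &pl); err != nil {
			panic(err)
		}
		fmt.Println(len(pl.Valid[0].Config.Nodes)) // prints 1, not the expected 3
	}
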

TestMultiNode/serial/CopyFile (0.28s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-613000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-613000 status --output json --alsologtostderr: exit status 7 (113.071275ms)
-- stdout --
	{"Name":"multinode-613000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}
-- /stdout --
** stderr ** 
	I0430 20:30:07.334082   15171 out.go:291] Setting OutFile to fd 1 ...
	I0430 20:30:07.334368   15171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:07.334374   15171 out.go:304] Setting ErrFile to fd 2...
	I0430 20:30:07.334378   15171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:07.334551   15171 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 20:30:07.334728   15171 out.go:298] Setting JSON to true
	I0430 20:30:07.334749   15171 mustload.go:65] Loading cluster: multinode-613000
	I0430 20:30:07.334790   15171 notify.go:220] Checking for updates...
	I0430 20:30:07.335036   15171 config.go:182] Loaded profile config "multinode-613000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 20:30:07.335053   15171 status.go:255] checking status of multinode-613000 ...
	I0430 20:30:07.335447   15171 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:30:07.383999   15171 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:30:07.384043   15171 status.go:330] multinode-613000 host status = "" (err=state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	)
	I0430 20:30:07.384060   15171 status.go:257] multinode-613000 status: &{Name:multinode-613000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0430 20:30:07.384083   15171 status.go:260] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	E0430 20:30:07.384090   15171 status.go:263] The "multinode-613000" host does not exist!
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-613000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-613000
helpers_test.go:235: (dbg) docker inspect multinode-613000:
-- stdout --
	[
	    {
	        "Name": "multinode-613000",
	        "Id": "e54ba4e4529a74a1b055f3c55669d8eafe88c619a594c67d4c3396084012bcdb",
	        "Created": "2024-05-01T03:22:42.608370144Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-613000"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (112.995618ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0430 20:30:07.549664   15177 status.go:249] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.28s)
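
The "cannot unmarshal object into Go value of type []cmd.Status" failure is a shape mismatch: with a single profile left, `minikube status --output json` emits one JSON object (the stdout above), while the test decodes into a slice. A minimal reproduction, with a stand-in Status struct in place of minikube's real cmd.Status:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status stands in for cmd.Status; only enough fields to show the
	// mismatch are included.
	type Status struct {
		Name string
		Host string
	}

	func main() {
		// A lone profile yields a single object rather than an array, so
		// decoding into a slice fails the same way the test did.
		out := `{"Name":"multinode-613000","Host":"Nonexistent"}`
		var statuses []Status
		if err := json.Unmarshal([]byte(out), &statuses); err != nil {
			fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
		}
	}
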

TestMultiNode/serial/StopNode (0.55s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-613000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-613000 node stop m03: exit status 85 (160.999627ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-613000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-613000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-613000 status: exit status 7 (113.845641ms)
-- stdout --
	multinode-613000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	
-- /stdout --
** stderr ** 
	E0430 20:30:07.825194   15183 status.go:260] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	E0430 20:30:07.825211   15183 status.go:263] The "multinode-613000" host does not exist!
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-613000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-613000 status --alsologtostderr: exit status 7 (112.051947ms)
-- stdout --
	multinode-613000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	
-- /stdout --
** stderr ** 
	I0430 20:30:07.888348   15187 out.go:291] Setting OutFile to fd 1 ...
	I0430 20:30:07.888644   15187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:07.888649   15187 out.go:304] Setting ErrFile to fd 2...
	I0430 20:30:07.888653   15187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:07.888823   15187 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 20:30:07.889010   15187 out.go:298] Setting JSON to false
	I0430 20:30:07.889031   15187 mustload.go:65] Loading cluster: multinode-613000
	I0430 20:30:07.889070   15187 notify.go:220] Checking for updates...
	I0430 20:30:07.889313   15187 config.go:182] Loaded profile config "multinode-613000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 20:30:07.889327   15187 status.go:255] checking status of multinode-613000 ...
	I0430 20:30:07.889682   15187 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:30:07.937205   15187 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:30:07.937284   15187 status.go:330] multinode-613000 host status = "" (err=state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	)
	I0430 20:30:07.937301   15187 status.go:257] multinode-613000 status: &{Name:multinode-613000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0430 20:30:07.937322   15187 status.go:260] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	E0430 20:30:07.937329   15187 status.go:263] The "multinode-613000" host does not exist!
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-613000 status --alsologtostderr": multinode-613000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent
multinode_test.go:271: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-613000 status --alsologtostderr": multinode-613000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent
multinode_test.go:275: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-613000 status --alsologtostderr": multinode-613000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-613000
helpers_test.go:235: (dbg) docker inspect multinode-613000:
-- stdout --
	[
	    {
	        "Name": "multinode-613000",
	        "Id": "e54ba4e4529a74a1b055f3c55669d8eafe88c619a594c67d4c3396084012bcdb",
	        "Created": "2024-05-01T03:22:42.608370144Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-613000"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (113.283721ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0430 20:30:08.102118   15193 status.go:249] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.55s)

TestMultiNode/serial/StartAfterStop (48.94s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-613000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-613000 node start m03 -v=7 --alsologtostderr: exit status 85 (155.057336ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I0430 20:30:08.165315   15197 out.go:291] Setting OutFile to fd 1 ...
	I0430 20:30:08.165592   15197 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:08.165597   15197 out.go:304] Setting ErrFile to fd 2...
	I0430 20:30:08.165601   15197 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:08.165777   15197 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 20:30:08.166124   15197 mustload.go:65] Loading cluster: multinode-613000
	I0430 20:30:08.166401   15197 config.go:182] Loaded profile config "multinode-613000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 20:30:08.187621   15197 out.go:177] 
	W0430 20:30:08.208855   15197 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0430 20:30:08.208880   15197 out.go:239] * 
	* 
	W0430 20:30:08.213469   15197 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0430 20:30:08.234718   15197 out.go:177] 
** /stderr **
multinode_test.go:284: I0430 20:30:08.165315   15197 out.go:291] Setting OutFile to fd 1 ...
I0430 20:30:08.165592   15197 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0430 20:30:08.165597   15197 out.go:304] Setting ErrFile to fd 2...
I0430 20:30:08.165601   15197 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0430 20:30:08.165777   15197 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
I0430 20:30:08.166124   15197 mustload.go:65] Loading cluster: multinode-613000
I0430 20:30:08.166401   15197 config.go:182] Loaded profile config "multinode-613000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0430 20:30:08.187621   15197 out.go:177] 
W0430 20:30:08.208855   15197 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0430 20:30:08.208880   15197 out.go:239] * 
* 
W0430 20:30:08.213469   15197 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0430 20:30:08.234718   15197 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-613000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-613000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-613000 status -v=7 --alsologtostderr: exit status 7 (113.959697ms)
-- stdout --
	multinode-613000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	
-- /stdout --
** stderr ** 
	I0430 20:30:08.320697   15199 out.go:291] Setting OutFile to fd 1 ...
	I0430 20:30:08.320966   15199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:08.320972   15199 out.go:304] Setting ErrFile to fd 2...
	I0430 20:30:08.320975   15199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:08.321154   15199 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 20:30:08.321319   15199 out.go:298] Setting JSON to false
	I0430 20:30:08.321343   15199 mustload.go:65] Loading cluster: multinode-613000
	I0430 20:30:08.321385   15199 notify.go:220] Checking for updates...
	I0430 20:30:08.321625   15199 config.go:182] Loaded profile config "multinode-613000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 20:30:08.321640   15199 status.go:255] checking status of multinode-613000 ...
	I0430 20:30:08.322033   15199 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:30:08.371416   15199 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:30:08.371490   15199 status.go:330] multinode-613000 host status = "" (err=state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	)
	I0430 20:30:08.371508   15199 status.go:257] multinode-613000 status: &{Name:multinode-613000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0430 20:30:08.371529   15199 status.go:260] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	E0430 20:30:08.371540   15199 status.go:263] The "multinode-613000" host does not exist!
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-613000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-613000 status -v=7 --alsologtostderr: exit status 7 (117.95932ms)
-- stdout --
	multinode-613000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	
-- /stdout --
** stderr ** 
	I0430 20:30:09.111237   15203 out.go:291] Setting OutFile to fd 1 ...
	I0430 20:30:09.111833   15203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:09.111840   15203 out.go:304] Setting ErrFile to fd 2...
	I0430 20:30:09.111844   15203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:09.112219   15203 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 20:30:09.112623   15203 out.go:298] Setting JSON to false
	I0430 20:30:09.112651   15203 mustload.go:65] Loading cluster: multinode-613000
	I0430 20:30:09.112697   15203 notify.go:220] Checking for updates...
	I0430 20:30:09.112901   15203 config.go:182] Loaded profile config "multinode-613000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 20:30:09.112916   15203 status.go:255] checking status of multinode-613000 ...
	I0430 20:30:09.113280   15203 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:30:09.162245   15203 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:30:09.162298   15203 status.go:330] multinode-613000 host status = "" (err=state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	)
	I0430 20:30:09.162322   15203 status.go:257] multinode-613000 status: &{Name:multinode-613000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0430 20:30:09.162339   15203 status.go:260] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	E0430 20:30:09.162348   15203 status.go:263] The "multinode-613000" host does not exist!
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-613000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-613000 status -v=7 --alsologtostderr: exit status 7 (120.849013ms)
-- stdout --
	multinode-613000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	
-- /stdout --
** stderr ** 
	I0430 20:30:10.410073   15207 out.go:291] Setting OutFile to fd 1 ...
	I0430 20:30:10.410442   15207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:10.410448   15207 out.go:304] Setting ErrFile to fd 2...
	I0430 20:30:10.410452   15207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:10.410641   15207 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 20:30:10.410832   15207 out.go:298] Setting JSON to false
	I0430 20:30:10.410853   15207 mustload.go:65] Loading cluster: multinode-613000
	I0430 20:30:10.410889   15207 notify.go:220] Checking for updates...
	I0430 20:30:10.412196   15207 config.go:182] Loaded profile config "multinode-613000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 20:30:10.412213   15207 status.go:255] checking status of multinode-613000 ...
	I0430 20:30:10.412604   15207 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:30:10.461178   15207 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:30:10.461222   15207 status.go:330] multinode-613000 host status = "" (err=state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	)
	I0430 20:30:10.461243   15207 status.go:257] multinode-613000 status: &{Name:multinode-613000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0430 20:30:10.461261   15207 status.go:260] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	E0430 20:30:10.461268   15207 status.go:263] The "multinode-613000" host does not exist!
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-613000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-613000 status -v=7 --alsologtostderr: exit status 7 (120.89348ms)
-- stdout --
	multinode-613000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	
-- /stdout --
** stderr ** 
	I0430 20:30:13.593638   15211 out.go:291] Setting OutFile to fd 1 ...
	I0430 20:30:13.593837   15211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:13.593843   15211 out.go:304] Setting ErrFile to fd 2...
	I0430 20:30:13.593847   15211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:13.594038   15211 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 20:30:13.594209   15211 out.go:298] Setting JSON to false
	I0430 20:30:13.594232   15211 mustload.go:65] Loading cluster: multinode-613000
	I0430 20:30:13.594271   15211 notify.go:220] Checking for updates...
	I0430 20:30:13.594510   15211 config.go:182] Loaded profile config "multinode-613000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 20:30:13.594523   15211 status.go:255] checking status of multinode-613000 ...
	I0430 20:30:13.594914   15211 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:30:13.645887   15211 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:30:13.645968   15211 status.go:330] multinode-613000 host status = "" (err=state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	)
	I0430 20:30:13.645985   15211 status.go:257] multinode-613000 status: &{Name:multinode-613000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0430 20:30:13.646007   15211 status.go:260] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	E0430 20:30:13.646014   15211 status.go:263] The "multinode-613000" host does not exist!
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-613000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-613000 status -v=7 --alsologtostderr: exit status 7 (117.766213ms)
-- stdout --
	multinode-613000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	
-- /stdout --
** stderr ** 
	I0430 20:30:15.764246   15215 out.go:291] Setting OutFile to fd 1 ...
	I0430 20:30:15.764446   15215 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:15.764452   15215 out.go:304] Setting ErrFile to fd 2...
	I0430 20:30:15.764456   15215 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:15.764648   15215 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 20:30:15.764827   15215 out.go:298] Setting JSON to false
	I0430 20:30:15.764849   15215 mustload.go:65] Loading cluster: multinode-613000
	I0430 20:30:15.764890   15215 notify.go:220] Checking for updates...
	I0430 20:30:15.765131   15215 config.go:182] Loaded profile config "multinode-613000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 20:30:15.765144   15215 status.go:255] checking status of multinode-613000 ...
	I0430 20:30:15.765521   15215 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:30:15.815650   15215 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:30:15.815724   15215 status.go:330] multinode-613000 host status = "" (err=state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	)
	I0430 20:30:15.815742   15215 status.go:257] multinode-613000 status: &{Name:multinode-613000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0430 20:30:15.815765   15215 status.go:260] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	E0430 20:30:15.815774   15215 status.go:263] The "multinode-613000" host does not exist!
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-613000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-613000 status -v=7 --alsologtostderr: exit status 7 (116.149025ms)
-- stdout --
	multinode-613000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	
-- /stdout --
** stderr ** 
	I0430 20:30:22.030795   15219 out.go:291] Setting OutFile to fd 1 ...
	I0430 20:30:22.030992   15219 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:22.030998   15219 out.go:304] Setting ErrFile to fd 2...
	I0430 20:30:22.031001   15219 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:22.031184   15219 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 20:30:22.031373   15219 out.go:298] Setting JSON to false
	I0430 20:30:22.031399   15219 mustload.go:65] Loading cluster: multinode-613000
	I0430 20:30:22.031440   15219 notify.go:220] Checking for updates...
	I0430 20:30:22.032629   15219 config.go:182] Loaded profile config "multinode-613000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 20:30:22.032647   15219 status.go:255] checking status of multinode-613000 ...
	I0430 20:30:22.033009   15219 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:30:22.080940   15219 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:30:22.080999   15219 status.go:330] multinode-613000 host status = "" (err=state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	)
	I0430 20:30:22.081020   15219 status.go:257] multinode-613000 status: &{Name:multinode-613000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0430 20:30:22.081038   15219 status.go:260] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	E0430 20:30:22.081045   15219 status.go:263] The "multinode-613000" host does not exist!
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-613000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-613000 status -v=7 --alsologtostderr: exit status 7 (120.69263ms)
-- stdout --
	multinode-613000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	
-- /stdout --
** stderr ** 
	I0430 20:30:29.888131   15223 out.go:291] Setting OutFile to fd 1 ...
	I0430 20:30:29.888316   15223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:29.888321   15223 out.go:304] Setting ErrFile to fd 2...
	I0430 20:30:29.888325   15223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:29.888502   15223 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 20:30:29.888687   15223 out.go:298] Setting JSON to false
	I0430 20:30:29.888713   15223 mustload.go:65] Loading cluster: multinode-613000
	I0430 20:30:29.888749   15223 notify.go:220] Checking for updates...
	I0430 20:30:29.890030   15223 config.go:182] Loaded profile config "multinode-613000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 20:30:29.890053   15223 status.go:255] checking status of multinode-613000 ...
	I0430 20:30:29.890410   15223 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:30:29.940963   15223 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:30:29.941012   15223 status.go:330] multinode-613000 host status = "" (err=state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	)
	I0430 20:30:29.941035   15223 status.go:257] multinode-613000 status: &{Name:multinode-613000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0430 20:30:29.941052   15223 status.go:260] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	E0430 20:30:29.941060   15223 status.go:263] The "multinode-613000" host does not exist!
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-613000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-613000 status -v=7 --alsologtostderr: exit status 7 (113.693084ms)
-- stdout --
	multinode-613000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	
-- /stdout --
** stderr ** 
	I0430 20:30:37.283257   15227 out.go:291] Setting OutFile to fd 1 ...
	I0430 20:30:37.283466   15227 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:37.283472   15227 out.go:304] Setting ErrFile to fd 2...
	I0430 20:30:37.283475   15227 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:37.283668   15227 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 20:30:37.283854   15227 out.go:298] Setting JSON to false
	I0430 20:30:37.283875   15227 mustload.go:65] Loading cluster: multinode-613000
	I0430 20:30:37.283922   15227 notify.go:220] Checking for updates...
	I0430 20:30:37.285088   15227 config.go:182] Loaded profile config "multinode-613000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 20:30:37.285108   15227 status.go:255] checking status of multinode-613000 ...
	I0430 20:30:37.285473   15227 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:30:37.333063   15227 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:30:37.333128   15227 status.go:330] multinode-613000 host status = "" (err=state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	)
	I0430 20:30:37.333151   15227 status.go:257] multinode-613000 status: &{Name:multinode-613000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0430 20:30:37.333175   15227 status.go:260] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	E0430 20:30:37.333182   15227 status.go:263] The "multinode-613000" host does not exist!

                                                
                                                
** /stderr **
E0430 20:30:49.738330    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-613000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-613000 status -v=7 --alsologtostderr: exit status 7 (118.904447ms)

                                                
                                                
-- stdout --
	multinode-613000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0430 20:30:56.821835   15237 out.go:291] Setting OutFile to fd 1 ...
	I0430 20:30:56.822067   15237 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:56.822073   15237 out.go:304] Setting ErrFile to fd 2...
	I0430 20:30:56.822077   15237 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:30:56.822264   15237 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 20:30:56.822451   15237 out.go:298] Setting JSON to false
	I0430 20:30:56.822473   15237 mustload.go:65] Loading cluster: multinode-613000
	I0430 20:30:56.822508   15237 notify.go:220] Checking for updates...
	I0430 20:30:56.823877   15237 config.go:182] Loaded profile config "multinode-613000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 20:30:56.823900   15237 status.go:255] checking status of multinode-613000 ...
	I0430 20:30:56.824285   15237 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:30:56.872788   15237 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:30:56.872844   15237 status.go:330] multinode-613000 host status = "" (err=state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	)
	I0430 20:30:56.872864   15237 status.go:257] multinode-613000 status: &{Name:multinode-613000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0430 20:30:56.872884   15237 status.go:260] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	E0430 20:30:56.872894   15237 status.go:263] The "multinode-613000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-613000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-613000
helpers_test.go:235: (dbg) docker inspect multinode-613000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-613000",
	        "Id": "e54ba4e4529a74a1b055f3c55669d8eafe88c619a594c67d4c3396084012bcdb",
	        "Created": "2024-05-01T03:22:42.608370144Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-613000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
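Note that the post-mortem "docker inspect multinode-613000" succeeds even though the container is gone: a bare "docker inspect" matches objects of any type, and here it resolves to the leftover bridge *network* of the same name (the Scope/Driver/IPAM fields above belong to a network object, not a container). One way to see which object kinds a name resolves to is docker inspect's --type filter; a short sketch, for illustration only:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// objectTypes probes which docker object kinds a name resolves to,
	// using docker inspect's --type filter.
	func objectTypes(name string) []string {
		var found []string
		for _, t := range []string{"container", "image", "network", "volume"} {
			if exec.Command("docker", "inspect", "--type", t, name).Run() == nil {
				found = append(found, t)
			}
		}
		return found
	}

	func main() {
		// For the state captured above this would print [network]: the
		// container was deleted but its network still exists.
		fmt.Println(objectTypes("multinode-613000"))
	}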
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (114.026913ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0430 20:30:57.038626   15243 status.go:249] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (48.94s)
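The post-mortem helper above tolerates exit status 7 from "minikube status" ("may be ok"), since a stopped or missing host is still a valid status answer; the test only fails because the host was expected to be running. A sketch of that tolerance, under the exit-code behavior shown in this log (hostState is hypothetical, not the helpers_test.go implementation):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostState runs the same status command the harness uses and treats
	// exit code 7 as a soft result rather than a hard failure.
	func hostState(profile string) (string, error) {
		cmd := exec.Command("out/minikube-darwin-amd64", "status",
			"--format={{.Host}}", "-p", profile)
		out, err := cmd.Output()
		state := strings.TrimSpace(string(out))
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
			return state, nil // e.g. "Nonexistent" (may be ok)
		}
		return state, err
	}

	func main() {
		state, err := hostState("multinode-613000")
		fmt.Println(state, err)
	}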

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (792.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-613000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-613000
E0430 20:31:06.686687    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-613000: exit status 82 (15.099505543s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-613000"  ...
	* Stopping node "multinode-613000"  ...
	* Stopping node "multinode-613000"  ...
	* Stopping node "multinode-613000"  ...
	* Stopping node "multinode-613000"  ...
	* Stopping node "multinode-613000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-613000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
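The six identical "Stopping node" lines reflect a bounded stop-retry loop: each attempt fails before the container state can even be read, and after the last try the command aborts with GUEST_STOP_TIMEOUT (exit 82). A hedged Go sketch of that pattern, assuming only what the log shows (stopWithRetries is a hypothetical helper, not minikube's stop code; the backoff schedule is invented for illustration):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	// stopWithRetries tries to stop a node container a fixed number of
	// times, backing off between attempts, and gives up with a
	// timeout-style error when the container can never be reached.
	func stopWithRetries(container string, attempts int) error {
		for i := 0; i < attempts; i++ {
			fmt.Printf("* Stopping node %q  ...\n", container)
			if err := exec.Command("docker", "stop", container).Run(); err == nil {
				return nil
			}
			time.Sleep(time.Duration(i+1) * time.Second) // simple linear backoff
		}
		return errors.New("GUEST_STOP_TIMEOUT: Unable to stop VM")
	}

	func main() {
		if err := stopWithRetries("multinode-613000", 6); err != nil {
			fmt.Println("X Exiting due to", err)
		}
	}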
multinode_test.go:323: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-613000" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-613000 --wait=true -v=8 --alsologtostderr
E0430 20:31:41.428250    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 20:36:06.792541    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 20:36:24.474083    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 20:36:41.430034    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 20:41:06.793642    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 20:41:41.431698    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-613000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m57.086267034s)

                                                
                                                
-- stdout --
	* [multinode-613000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18779
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-613000" primary control-plane node in "multinode-613000" cluster
	* Pulling base image v0.0.43-1714386659-18769 ...
	* docker "multinode-613000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-613000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0430 20:31:12.265587   15263 out.go:291] Setting OutFile to fd 1 ...
	I0430 20:31:12.265791   15263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:31:12.265797   15263 out.go:304] Setting ErrFile to fd 2...
	I0430 20:31:12.265800   15263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:31:12.265969   15263 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 20:31:12.267362   15263 out.go:298] Setting JSON to false
	I0430 20:31:12.289182   15263 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5443,"bootTime":1714528829,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0430 20:31:12.289277   15263 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0430 20:31:12.310801   15263 out.go:177] * [multinode-613000] minikube v1.33.0 on Darwin 14.4.1
	I0430 20:31:12.352865   15263 out.go:177]   - MINIKUBE_LOCATION=18779
	I0430 20:31:12.352913   15263 notify.go:220] Checking for updates...
	I0430 20:31:12.374814   15263 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	I0430 20:31:12.396850   15263 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0430 20:31:12.418720   15263 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0430 20:31:12.440467   15263 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	I0430 20:31:12.461752   15263 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0430 20:31:12.483491   15263 config.go:182] Loaded profile config "multinode-613000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 20:31:12.483675   15263 driver.go:392] Setting default libvirt URI to qemu:///system
	I0430 20:31:12.539343   15263 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0430 20:31:12.539508   15263 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0430 20:31:12.648783   15263 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:83 OomKillDisable:false NGoroutines:125 SystemTime:2024-05-01 03:31:12.637600857 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211080192 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0430 20:31:12.691584   15263 out.go:177] * Using the docker driver based on existing profile
	I0430 20:31:12.712637   15263 start.go:297] selected driver: docker
	I0430 20:31:12.712674   15263 start.go:901] validating driver "docker" against &{Name:multinode-613000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0430 20:31:12.712792   15263 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0430 20:31:12.712996   15263 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0430 20:31:12.821395   15263 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:83 OomKillDisable:false NGoroutines:125 SystemTime:2024-05-01 03:31:12.811039348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211080192 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0430 20:31:12.824415   15263 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0430 20:31:12.824486   15263 cni.go:84] Creating CNI manager for ""
	I0430 20:31:12.824497   15263 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0430 20:31:12.824581   15263 start.go:340] cluster config:
	{Name:multinode-613000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0430 20:31:12.845638   15263 out.go:177] * Starting "multinode-613000" primary control-plane node in "multinode-613000" cluster
	I0430 20:31:12.866593   15263 cache.go:121] Beginning downloading kic base image for docker with docker
	I0430 20:31:12.887534   15263 out.go:177] * Pulling base image v0.0.43-1714386659-18769 ...
	I0430 20:31:12.929664   15263 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 20:31:12.929717   15263 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e in local docker daemon
	I0430 20:31:12.929745   15263 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0430 20:31:12.929757   15263 cache.go:56] Caching tarball of preloaded images
	I0430 20:31:12.929972   15263 preload.go:173] Found /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0430 20:31:12.929995   15263 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0430 20:31:12.930933   15263 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/multinode-613000/config.json ...
	I0430 20:31:12.982385   15263 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e in local docker daemon, skipping pull
	I0430 20:31:12.982406   15263 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e exists in daemon, skipping load
	I0430 20:31:12.982427   15263 cache.go:194] Successfully downloaded all kic artifacts
	I0430 20:31:12.982476   15263 start.go:360] acquireMachinesLock for multinode-613000: {Name:mk4b1997cc63c071a5d4bd65917cfb80e5f3ad67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0430 20:31:12.982579   15263 start.go:364] duration metric: took 83.914µs to acquireMachinesLock for "multinode-613000"
	I0430 20:31:12.982605   15263 start.go:96] Skipping create...Using existing machine configuration
	I0430 20:31:12.982616   15263 fix.go:54] fixHost starting: 
	I0430 20:31:12.982936   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:31:13.031899   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:31:13.031967   15263 fix.go:112] recreateIfNeeded on multinode-613000: state= err=unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:31:13.031990   15263 fix.go:117] machineExists: false. err=machine does not exist
	I0430 20:31:13.053839   15263 out.go:177] * docker "multinode-613000" container is missing, will recreate.
	I0430 20:31:13.095461   15263 delete.go:124] DEMOLISHING multinode-613000 ...
	I0430 20:31:13.095629   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:31:13.145219   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	W0430 20:31:13.145268   15263 stop.go:83] unable to get state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:31:13.145282   15263 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:31:13.145654   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:31:13.193602   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:31:13.193669   15263 delete.go:82] Unable to get host status for multinode-613000, assuming it has already been deleted: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:31:13.193751   15263 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-613000
	W0430 20:31:13.241117   15263 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-613000 returned with exit code 1
	I0430 20:31:13.241148   15263 kic.go:371] could not find the container multinode-613000 to remove it. will try anyways
	I0430 20:31:13.241210   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:31:13.288703   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	W0430 20:31:13.288750   15263 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:31:13.288826   15263 cli_runner.go:164] Run: docker exec --privileged -t multinode-613000 /bin/bash -c "sudo init 0"
	W0430 20:31:13.335801   15263 cli_runner.go:211] docker exec --privileged -t multinode-613000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0430 20:31:13.335837   15263 oci.go:650] error shutdown multinode-613000: docker exec --privileged -t multinode-613000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:31:14.338243   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:31:14.390652   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:31:14.390696   15263 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:31:14.390714   15263 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:31:14.390753   15263 retry.go:31] will retry after 473.477976ms: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:31:14.865655   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:31:14.967814   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:31:14.967856   15263 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:31:14.967864   15263 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:31:14.967888   15263 retry.go:31] will retry after 388.927581ms: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:31:15.358915   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:31:15.411980   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:31:15.412026   15263 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:31:15.412034   15263 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:31:15.412068   15263 retry.go:31] will retry after 903.522059ms: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:31:16.317937   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:31:16.370198   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:31:16.370246   15263 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:31:16.370260   15263 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:31:16.370283   15263 retry.go:31] will retry after 1.645495541s: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:31:18.016317   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:31:18.066517   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:31:18.066562   15263 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:31:18.066570   15263 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:31:18.066594   15263 retry.go:31] will retry after 1.545544279s: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:31:19.614291   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:31:19.665713   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:31:19.665755   15263 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:31:19.665765   15263 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:31:19.665790   15263 retry.go:31] will retry after 3.639363892s: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:31:23.305597   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:31:23.354960   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:31:23.355006   15263 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:31:23.355015   15263 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:31:23.355041   15263 retry.go:31] will retry after 6.315278488s: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:31:29.776008   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:31:29.829404   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:31:29.829447   15263 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:31:29.829457   15263 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:31:29.829501   15263 oci.go:88] couldn't shut down multinode-613000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	 
	I0430 20:31:29.829569   15263 cli_runner.go:164] Run: docker rm -f -v multinode-613000
	I0430 20:31:29.877502   15263 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-613000
	W0430 20:31:29.925432   15263 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-613000 returned with exit code 1
	I0430 20:31:29.925542   15263 cli_runner.go:164] Run: docker network inspect multinode-613000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0430 20:31:29.973786   15263 cli_runner.go:164] Run: docker network rm multinode-613000
	I0430 20:31:30.129052   15263 fix.go:124] Sleeping 1 second for extra luck!
	I0430 20:31:31.131256   15263 start.go:125] createHost starting for "" (driver="docker")
	I0430 20:31:31.152249   15263 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0430 20:31:31.152457   15263 start.go:159] libmachine.API.Create for "multinode-613000" (driver="docker")
	I0430 20:31:31.152511   15263 client.go:168] LocalClient.Create starting
	I0430 20:31:31.152733   15263 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/ca.pem
	I0430 20:31:31.152840   15263 main.go:141] libmachine: Decoding PEM data...
	I0430 20:31:31.152875   15263 main.go:141] libmachine: Parsing certificate...
	I0430 20:31:31.152976   15263 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/cert.pem
	I0430 20:31:31.153053   15263 main.go:141] libmachine: Decoding PEM data...
	I0430 20:31:31.153069   15263 main.go:141] libmachine: Parsing certificate...
	I0430 20:31:31.153873   15263 cli_runner.go:164] Run: docker network inspect multinode-613000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0430 20:31:31.206520   15263 cli_runner.go:211] docker network inspect multinode-613000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0430 20:31:31.206608   15263 network_create.go:281] running [docker network inspect multinode-613000] to gather additional debugging logs...
	I0430 20:31:31.206624   15263 cli_runner.go:164] Run: docker network inspect multinode-613000
	W0430 20:31:31.255384   15263 cli_runner.go:211] docker network inspect multinode-613000 returned with exit code 1
	I0430 20:31:31.255418   15263 network_create.go:284] error running [docker network inspect multinode-613000]: docker network inspect multinode-613000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-613000 not found
	I0430 20:31:31.255427   15263 network_create.go:286] output of [docker network inspect multinode-613000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-613000 not found
	
	** /stderr **
	I0430 20:31:31.255534   15263 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0430 20:31:31.305268   15263 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 20:31:31.306904   15263 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 20:31:31.307273   15263 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002588cd0}
	I0430 20:31:31.307290   15263 network_create.go:124] attempt to create docker network multinode-613000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0430 20:31:31.307354   15263 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-613000 multinode-613000
	W0430 20:31:31.356089   15263 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-613000 multinode-613000 returned with exit code 1
	W0430 20:31:31.356125   15263 network_create.go:149] failed to create docker network multinode-613000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-613000 multinode-613000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0430 20:31:31.356140   15263 network_create.go:116] failed to create docker network multinode-613000 192.168.67.0/24, will retry: subnet is taken
	I0430 20:31:31.357503   15263 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 20:31:31.357871   15263 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024ec430}
	I0430 20:31:31.357885   15263 network_create.go:124] attempt to create docker network multinode-613000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0430 20:31:31.357954   15263 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-613000 multinode-613000
	I0430 20:31:31.442345   15263 network_create.go:108] docker network multinode-613000 192.168.76.0/24 created
	I0430 20:31:31.442381   15263 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-613000" container
	I0430 20:31:31.442475   15263 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0430 20:31:31.491718   15263 cli_runner.go:164] Run: docker volume create multinode-613000 --label name.minikube.sigs.k8s.io=multinode-613000 --label created_by.minikube.sigs.k8s.io=true
	I0430 20:31:31.539576   15263 oci.go:103] Successfully created a docker volume multinode-613000
	I0430 20:31:31.539690   15263 cli_runner.go:164] Run: docker run --rm --name multinode-613000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-613000 --entrypoint /usr/bin/test -v multinode-613000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -d /var/lib
	I0430 20:31:31.787977   15263 oci.go:107] Successfully prepared a docker volume multinode-613000
	I0430 20:31:31.788020   15263 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 20:31:31.788033   15263 kic.go:194] Starting extracting preloaded images to volume ...
	I0430 20:31:31.788121   15263 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-613000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -I lz4 -xf /preloaded.tar -C /extractDir
	I0430 20:37:31.156312   15263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0430 20:37:31.156453   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:37:31.209560   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:37:31.209678   15263 retry.go:31] will retry after 162.091539ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:31.374145   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:37:31.427868   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:37:31.427984   15263 retry.go:31] will retry after 548.996131ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:31.977423   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:37:32.029698   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:37:32.029803   15263 retry.go:31] will retry after 768.464622ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:32.800643   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:37:32.851587   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	W0430 20:37:32.851714   15263 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	
	W0430 20:37:32.851732   15263 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:32.851796   15263 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0430 20:37:32.851856   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:37:32.899250   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:37:32.899352   15263 retry.go:31] will retry after 239.606866ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:33.141390   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:37:33.194150   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:37:33.194261   15263 retry.go:31] will retry after 540.654529ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:33.737321   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:37:33.792031   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:37:33.792125   15263 retry.go:31] will retry after 514.117458ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:34.308043   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:37:34.359571   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	W0430 20:37:34.359677   15263 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	
	W0430 20:37:34.359695   15263 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:34.359712   15263 start.go:128] duration metric: took 6m3.226035627s to createHost
	I0430 20:37:34.359780   15263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0430 20:37:34.359832   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:37:34.407778   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:37:34.407869   15263 retry.go:31] will retry after 272.915181ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:34.683131   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:37:34.735384   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:37:34.735477   15263 retry.go:31] will retry after 231.973634ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:34.968265   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:37:35.072683   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:37:35.072797   15263 retry.go:31] will retry after 481.501619ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:35.556704   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:37:35.608440   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	W0430 20:37:35.608543   15263 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	
	W0430 20:37:35.608564   15263 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:35.608626   15263 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0430 20:37:35.608679   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:37:35.656578   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:37:35.656683   15263 retry.go:31] will retry after 322.165102ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:35.981243   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:37:36.034763   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:37:36.034861   15263 retry.go:31] will retry after 317.08543ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:36.354339   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:37:36.405589   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:37:36.405691   15263 retry.go:31] will retry after 605.505818ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:37.013643   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:37:37.063472   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	W0430 20:37:37.063586   15263 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	
	W0430 20:37:37.063601   15263 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:37.063623   15263 fix.go:56] duration metric: took 6m23.973963222s for fixHost
	I0430 20:37:37.063630   15263 start.go:83] releasing machines lock for "multinode-613000", held for 6m23.974002497s
	W0430 20:37:37.063646   15263 start.go:713] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W0430 20:37:37.063707   15263 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I0430 20:37:37.063713   15263 start.go:728] Will try again in 5 seconds ...
	I0430 20:37:42.064874   15263 start.go:360] acquireMachinesLock for multinode-613000: {Name:mk4b1997cc63c071a5d4bd65917cfb80e5f3ad67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0430 20:37:42.065093   15263 start.go:364] duration metric: took 158.523µs to acquireMachinesLock for "multinode-613000"
	I0430 20:37:42.065125   15263 start.go:96] Skipping create...Using existing machine configuration
	I0430 20:37:42.065141   15263 fix.go:54] fixHost starting: 
	I0430 20:37:42.065564   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:37:42.115793   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:37:42.115840   15263 fix.go:112] recreateIfNeeded on multinode-613000: state= err=unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:42.115856   15263 fix.go:117] machineExists: false. err=machine does not exist
	I0430 20:37:42.137799   15263 out.go:177] * docker "multinode-613000" container is missing, will recreate.
	I0430 20:37:42.180290   15263 delete.go:124] DEMOLISHING multinode-613000 ...
	I0430 20:37:42.180506   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:37:42.229424   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	W0430 20:37:42.229467   15263 stop.go:83] unable to get state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:42.229486   15263 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:42.229862   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:37:42.278104   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:37:42.278171   15263 delete.go:82] Unable to get host status for multinode-613000, assuming it has already been deleted: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:42.278267   15263 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-613000
	W0430 20:37:42.325079   15263 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-613000 returned with exit code 1
	I0430 20:37:42.325113   15263 kic.go:371] could not find the container multinode-613000 to remove it. will try anyways
	I0430 20:37:42.325184   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:37:42.373956   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	W0430 20:37:42.394877   15263 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:42.395016   15263 cli_runner.go:164] Run: docker exec --privileged -t multinode-613000 /bin/bash -c "sudo init 0"
	W0430 20:37:42.443777   15263 cli_runner.go:211] docker exec --privileged -t multinode-613000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0430 20:37:42.443808   15263 oci.go:650] error shutdown multinode-613000: docker exec --privileged -t multinode-613000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:43.446195   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:37:43.497755   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:37:43.497807   15263 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:43.497820   15263 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:37:43.497845   15263 retry.go:31] will retry after 701.369001ms: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:44.201603   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:37:44.252882   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:37:44.252926   15263 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:44.252940   15263 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:37:44.252964   15263 retry.go:31] will retry after 769.706404ms: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:45.023018   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:37:45.071975   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:37:45.072030   15263 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:45.072037   15263 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:37:45.072063   15263 retry.go:31] will retry after 1.223695803s: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:46.297483   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:37:46.348876   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:37:46.348925   15263 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:46.348937   15263 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:37:46.348962   15263 retry.go:31] will retry after 1.27313486s: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:47.624424   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:37:47.686907   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:37:47.686953   15263 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:47.686963   15263 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:37:47.686987   15263 retry.go:31] will retry after 2.832421438s: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:50.521727   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:37:50.573442   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:37:50.573485   15263 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:50.573495   15263 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:37:50.573523   15263 retry.go:31] will retry after 2.82081548s: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:53.394938   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:37:53.446612   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:37:53.446656   15263 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:37:53.446666   15263 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:37:53.446688   15263 retry.go:31] will retry after 8.493608986s: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:38:01.940707   15263 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:38:01.991492   15263 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:38:01.991537   15263 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:38:01.991549   15263 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:38:01.991579   15263 oci.go:88] couldn't shut down multinode-613000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	 
	I0430 20:38:01.991652   15263 cli_runner.go:164] Run: docker rm -f -v multinode-613000
	I0430 20:38:02.041561   15263 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-613000
	W0430 20:38:02.088465   15263 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-613000 returned with exit code 1
	I0430 20:38:02.088570   15263 cli_runner.go:164] Run: docker network inspect multinode-613000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0430 20:38:02.137010   15263 cli_runner.go:164] Run: docker network rm multinode-613000
	I0430 20:38:02.239091   15263 fix.go:124] Sleeping 1 second for extra luck!
	I0430 20:38:03.240451   15263 start.go:125] createHost starting for "" (driver="docker")
	I0430 20:38:03.262702   15263 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0430 20:38:03.262885   15263 start.go:159] libmachine.API.Create for "multinode-613000" (driver="docker")
	I0430 20:38:03.262911   15263 client.go:168] LocalClient.Create starting
	I0430 20:38:03.263133   15263 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/ca.pem
	I0430 20:38:03.263234   15263 main.go:141] libmachine: Decoding PEM data...
	I0430 20:38:03.263262   15263 main.go:141] libmachine: Parsing certificate...
	I0430 20:38:03.263341   15263 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/cert.pem
	I0430 20:38:03.263431   15263 main.go:141] libmachine: Decoding PEM data...
	I0430 20:38:03.263448   15263 main.go:141] libmachine: Parsing certificate...
	I0430 20:38:03.284713   15263 cli_runner.go:164] Run: docker network inspect multinode-613000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0430 20:38:03.335242   15263 cli_runner.go:211] docker network inspect multinode-613000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0430 20:38:03.335336   15263 network_create.go:281] running [docker network inspect multinode-613000] to gather additional debugging logs...
	I0430 20:38:03.335355   15263 cli_runner.go:164] Run: docker network inspect multinode-613000
	W0430 20:38:03.382565   15263 cli_runner.go:211] docker network inspect multinode-613000 returned with exit code 1
	I0430 20:38:03.382597   15263 network_create.go:284] error running [docker network inspect multinode-613000]: docker network inspect multinode-613000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-613000 not found
	I0430 20:38:03.382609   15263 network_create.go:286] output of [docker network inspect multinode-613000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-613000 not found
	
	** /stderr **
	I0430 20:38:03.382762   15263 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0430 20:38:03.432847   15263 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 20:38:03.434383   15263 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 20:38:03.435966   15263 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 20:38:03.437615   15263 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 20:38:03.438122   15263 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0009725b0}
	I0430 20:38:03.438141   15263 network_create.go:124] attempt to create docker network multinode-613000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0430 20:38:03.438232   15263 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-613000 multinode-613000
	I0430 20:38:03.523049   15263 network_create.go:108] docker network multinode-613000 192.168.85.0/24 created
	I0430 20:38:03.523083   15263 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-613000" container
	I0430 20:38:03.523196   15263 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0430 20:38:03.572117   15263 cli_runner.go:164] Run: docker volume create multinode-613000 --label name.minikube.sigs.k8s.io=multinode-613000 --label created_by.minikube.sigs.k8s.io=true
	I0430 20:38:03.620111   15263 oci.go:103] Successfully created a docker volume multinode-613000
	I0430 20:38:03.620232   15263 cli_runner.go:164] Run: docker run --rm --name multinode-613000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-613000 --entrypoint /usr/bin/test -v multinode-613000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -d /var/lib
	I0430 20:38:03.862361   15263 oci.go:107] Successfully prepared a docker volume multinode-613000
	I0430 20:38:03.862397   15263 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 20:38:03.862410   15263 kic.go:194] Starting extracting preloaded images to volume ...
	I0430 20:38:03.862520   15263 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-613000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -I lz4 -xf /preloaded.tar -C /extractDir
	I0430 20:44:03.265702   15263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0430 20:44:03.265833   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:44:03.320030   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:44:03.320141   15263 retry.go:31] will retry after 272.756519ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:03.595350   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:44:03.646843   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:44:03.646958   15263 retry.go:31] will retry after 371.582863ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:04.019698   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:44:04.071859   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:44:04.071973   15263 retry.go:31] will retry after 578.472387ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:04.652802   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:44:04.705772   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	W0430 20:44:04.705879   15263 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	
	W0430 20:44:04.705900   15263 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:04.705965   15263 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0430 20:44:04.706025   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:44:04.753780   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:44:04.753887   15263 retry.go:31] will retry after 340.488253ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:05.096744   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:44:05.147471   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:44:05.147568   15263 retry.go:31] will retry after 336.93068ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:05.486918   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:44:05.540083   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:44:05.540181   15263 retry.go:31] will retry after 488.175204ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:06.030718   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:44:06.081742   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	W0430 20:44:06.081856   15263 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	
	W0430 20:44:06.081880   15263 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:06.081889   15263 start.go:128] duration metric: took 6m2.839435622s to createHost
	I0430 20:44:06.081951   15263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0430 20:44:06.082001   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:44:06.131064   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:44:06.131154   15263 retry.go:31] will retry after 363.159426ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:06.495635   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:44:06.548163   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:44:06.548267   15263 retry.go:31] will retry after 350.933691ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:06.900642   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:44:06.950277   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:44:06.950374   15263 retry.go:31] will retry after 449.617768ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:07.401523   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:44:07.453069   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:44:07.453163   15263 retry.go:31] will retry after 527.245128ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:07.982787   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:44:08.035217   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	W0430 20:44:08.035312   15263 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	
	W0430 20:44:08.035336   15263 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:08.035388   15263 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0430 20:44:08.035447   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:44:08.083073   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:44:08.083164   15263 retry.go:31] will retry after 259.981584ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:08.344668   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:44:08.397889   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:44:08.397984   15263 retry.go:31] will retry after 195.87396ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:08.596228   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:44:08.648770   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	I0430 20:44:08.648869   15263 retry.go:31] will retry after 527.398627ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:09.178002   15263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000
	W0430 20:44:09.230804   15263 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000 returned with exit code 1
	W0430 20:44:09.230913   15263 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	
	W0430 20:44:09.230932   15263 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-613000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-613000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:09.230947   15263 fix.go:56] duration metric: took 6m27.163721993s for fixHost
	I0430 20:44:09.230953   15263 start.go:83] releasing machines lock for "multinode-613000", held for 6m27.16376425s
	W0430 20:44:09.231028   15263 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-613000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-613000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0430 20:44:09.273377   15263 out.go:177] 
	W0430 20:44:09.294583   15263 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0430 20:44:09.294641   15263 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0430 20:44:09.294684   15263 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0430 20:44:09.337485   15263 out.go:177] 

** /stderr **
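The stderr dump above is dominated by a single failing probe: minikube resolves the node's SSH endpoint by asking the Docker daemon which host port is published for the container's 22/tcp, and every attempt exits 1 because the container no longer exists. Below is a minimal Go sketch of that lookup; the template string is taken verbatim from the cli_runner lines above, while the function name and error wrapping are illustrative, not minikube's actual code.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks the Docker daemon for the host port published for the
// container's 22/tcp, using the same Go template that appears throughout
// the log. When the container does not exist, docker prints
// "Error response from daemon: No such container: <name>" and exits 1,
// which is the state every retry above keeps hitting.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("get port 22 for %q: %w: %s", container, err, out)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	if port, err := sshHostPort("multinode-613000"); err != nil {
		fmt.Println(err) // mirrors the error each retry in the log wraps
	} else {
		fmt.Println("ssh host port:", port)
	}
}
```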
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-613000" : exit status 52
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-613000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-613000
helpers_test.go:235: (dbg) docker inspect multinode-613000:

-- stdout --
	[
	    {
	        "Name": "multinode-613000",
	        "Id": "0ac96f330a0ee9e6bb18e2b2c5105d6a660313a854bd397fb03e8c8e9cbc7e26",
	        "Created": "2024-05-01T03:38:03.483114521Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-613000"
	        }
	    }
	]

-- /stdout --
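The network that outlives the failed start sits on 192.168.85.0/24 with gateway 192.168.85.1, and the stderr log shows how that range was chosen: network.go skipped the reserved subnets 192.168.49.0/24, .58, .67 and .76, then took the first free candidate. Here is a sketch of that scan, assuming candidates advance the third octet by 9 per step (the sequence visible in the log); the function name and hard-coded reserved set are illustrative, since minikube discovers reservations dynamically.

```go
package main

import "fmt"

// freePrivateSubnet steps through 192.168.49.0/24, 192.168.58.0/24, ...
// (third octet += 9, matching the skipping-subnet lines in the log) and
// returns the first candidate not already reserved by an existing network.
func freePrivateSubnet(reserved map[string]bool) (string, bool) {
	for octet := 49; octet <= 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !reserved[subnet] {
			return subnet, true
		}
	}
	return "", false
}

func main() {
	// The four subnets the log reports as reserved before settling on .85.
	reserved := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	subnet, ok := freePrivateSubnet(reserved)
	fmt.Println(subnet, ok) // 192.168.85.0/24 true
}
```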
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (113.288716ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0430 20:44:09.642938   15588 status.go:249] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (792.50s)
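Each failed probe above is rescheduled by retry.go with a short randomized delay (548ms, 768ms, 239ms, ...) until the six-minute createHost deadline expires, which is what finally surfaces as DRV_CREATE_TIMEOUT. A minimal sketch of that retry-until-deadline shape follows; the 200-800ms jitter window is an assumption for illustration, not minikube's actual backoff policy.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntilDeadline re-runs probe with a randomized delay until it succeeds
// or the deadline passes, the shape behind the "retry.go:31] will retry
// after ..." lines in the log above.
func retryUntilDeadline(timeout time.Duration, probe func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("create host timed out: last error: %w", err)
		}
		// Illustrative jitter; the real delays in the log vary per attempt.
		time.Sleep(200*time.Millisecond + time.Duration(rand.Int63n(int64(600*time.Millisecond))))
	}
}

func main() {
	err := retryUntilDeadline(2*time.Second, func() error {
		return errors.New("No such container: multinode-613000")
	})
	fmt.Println(err)
}
```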

TestMultiNode/serial/DeleteNode (0.5s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-613000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-613000 node delete m03: exit status 80 (199.146875ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-613000 host status: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	

** /stderr **
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-amd64 -p multinode-613000 node delete m03": exit status 80
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-613000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-613000 status --alsologtostderr: exit status 7 (113.392818ms)

-- stdout --
	multinode-613000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0430 20:44:09.904750   15596 out.go:291] Setting OutFile to fd 1 ...
	I0430 20:44:09.904957   15596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:44:09.904963   15596 out.go:304] Setting ErrFile to fd 2...
	I0430 20:44:09.904967   15596 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:44:09.905143   15596 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 20:44:09.905325   15596 out.go:298] Setting JSON to false
	I0430 20:44:09.905347   15596 mustload.go:65] Loading cluster: multinode-613000
	I0430 20:44:09.905387   15596 notify.go:220] Checking for updates...
	I0430 20:44:09.905629   15596 config.go:182] Loaded profile config "multinode-613000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 20:44:09.905643   15596 status.go:255] checking status of multinode-613000 ...
	I0430 20:44:09.907021   15596 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:44:09.955632   15596 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:44:09.955691   15596 status.go:330] multinode-613000 host status = "" (err=state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	)
	I0430 20:44:09.955712   15596 status.go:257] multinode-613000 status: &{Name:multinode-613000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0430 20:44:09.955729   15596 status.go:260] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	E0430 20:44:09.955737   15596 status.go:263] The "multinode-613000" host does not exist!

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-613000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-613000
helpers_test.go:235: (dbg) docker inspect multinode-613000:

-- stdout --
	[
	    {
	        "Name": "multinode-613000",
	        "Id": "0ac96f330a0ee9e6bb18e2b2c5105d6a660313a854bd397fb03e8c8e9cbc7e26",
	        "Created": "2024-05-01T03:38:03.483114521Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-613000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (112.627134ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0430 20:44:10.140809   15602 status.go:249] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.50s)
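
Note: the DeleteNode failure above is a downstream symptom rather than a bug in the delete path itself. The cluster's backing container is already gone, so every status probe runs `docker container inspect multinode-613000 --format={{.State.Status}}`, gets exit status 1 with "No such container", and maps that to the "Nonexistent" host state (status.go:330 and status.go:257 in the stderr log). A minimal, hypothetical Go sketch of that mapping follows; it is illustrative only, not minikube's actual implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState mirrors the probe visible in the log: ask the Docker CLI
// for the container's state and treat "No such container" as "Nonexistent".
func containerState(name string) string {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "No such container") {
			return "Nonexistent" // container was deleted out from under us
		}
		return "Unknown" // inspect failed for some other reason
	}
	return strings.TrimSpace(string(out)) // e.g. "running" or "exited"
}

func main() {
	fmt.Println(containerState("multinode-613000"))
}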

TestMultiNode/serial/StopMultiNode (15.63s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-613000 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-613000 stop: exit status 82 (15.243634242s)

-- stdout --
	* Stopping node "multinode-613000"  ...
	* Stopping node "multinode-613000"  ...
	* Stopping node "multinode-613000"  ...
	* Stopping node "multinode-613000"  ...
	* Stopping node "multinode-613000"  ...
	* Stopping node "multinode-613000"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-613000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-darwin-amd64 -p multinode-613000 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-613000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-613000 status: exit status 7 (113.395556ms)

-- stdout --
	multinode-613000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0430 20:44:25.498281   15623 status.go:260] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	E0430 20:44:25.498293   15623 status.go:263] The "multinode-613000" host does not exist!

** /stderr **
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-613000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-613000 status --alsologtostderr: exit status 7 (112.053064ms)

-- stdout --
	multinode-613000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0430 20:44:25.560321   15627 out.go:291] Setting OutFile to fd 1 ...
	I0430 20:44:25.560518   15627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:44:25.560524   15627 out.go:304] Setting ErrFile to fd 2...
	I0430 20:44:25.560527   15627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:44:25.560689   15627 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 20:44:25.560877   15627 out.go:298] Setting JSON to false
	I0430 20:44:25.560903   15627 mustload.go:65] Loading cluster: multinode-613000
	I0430 20:44:25.560938   15627 notify.go:220] Checking for updates...
	I0430 20:44:25.562150   15627 config.go:182] Loaded profile config "multinode-613000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 20:44:25.562171   15627 status.go:255] checking status of multinode-613000 ...
	I0430 20:44:25.562569   15627 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:44:25.610338   15627 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:44:25.610406   15627 status.go:330] multinode-613000 host status = "" (err=state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	)
	I0430 20:44:25.610427   15627 status.go:257] multinode-613000 status: &{Name:multinode-613000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0430 20:44:25.610453   15627 status.go:260] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	E0430 20:44:25.610460   15627 status.go:263] The "multinode-613000" host does not exist!

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-613000 status --alsologtostderr": multinode-613000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-613000 status --alsologtostderr": multinode-613000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-613000
helpers_test.go:235: (dbg) docker inspect multinode-613000:

-- stdout --
	[
	    {
	        "Name": "multinode-613000",
	        "Id": "0ac96f330a0ee9e6bb18e2b2c5105d6a660313a854bd397fb03e8c8e9cbc7e26",
	        "Created": "2024-05-01T03:38:03.483114521Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-613000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (111.762779ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0430 20:44:25.774432   15633 status.go:249] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (15.63s)
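
Note: exit status 82 corresponds to the GUEST_STOP_TIMEOUT error shown above. The six "Stopping node" lines are retries of the stop, and each verification pass fails because the container to inspect no longer exists, so the command finally gives up. The following Go sketch of a bounded stop-and-verify loop is written under assumed semantics for illustration; it is not minikube's real stop code.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

var errStopTimeout = errors.New("could not confirm the container stopped")

// stopWithRetries issues a best-effort stop, then polls the container state;
// if no pass confirms shutdown, it surfaces a stop-timeout error analogous
// to the GUEST_STOP_TIMEOUT above.
func stopWithRetries(name string, attempts int) error {
	for i := 0; i < attempts; i++ {
		fmt.Printf("* Stopping node %q ...\n", name)
		_ = exec.Command("docker", "stop", name).Run() // ignore errors; verify below
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "exited" {
			return nil // shutdown confirmed
		}
		time.Sleep(time.Second) // a fixed backoff keeps the sketch simple
	}
	return errStopTimeout
}

func main() {
	if err := stopWithRetries("multinode-613000", 6); err != nil {
		fmt.Println("stop failed:", err)
	}
}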

TestMultiNode/serial/RestartMultiNode (106.36s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-613000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0430 20:46:06.795559    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-613000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (1m46.186001178s)

-- stdout --
	* [multinode-613000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18779
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-613000" primary control-plane node in "multinode-613000" cluster
	* Pulling base image v0.0.43-1714386659-18769 ...
	* docker "multinode-613000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

-- /stdout --
** stderr ** 
	I0430 20:44:25.837340   15637 out.go:291] Setting OutFile to fd 1 ...
	I0430 20:44:25.837640   15637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:44:25.837646   15637 out.go:304] Setting ErrFile to fd 2...
	I0430 20:44:25.837650   15637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 20:44:25.837815   15637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 20:44:25.839265   15637 out.go:298] Setting JSON to false
	I0430 20:44:25.861328   15637 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6236,"bootTime":1714528829,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0430 20:44:25.861421   15637 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0430 20:44:25.884132   15637 out.go:177] * [multinode-613000] minikube v1.33.0 on Darwin 14.4.1
	I0430 20:44:25.926759   15637 out.go:177]   - MINIKUBE_LOCATION=18779
	I0430 20:44:25.926815   15637 notify.go:220] Checking for updates...
	I0430 20:44:25.948659   15637 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	I0430 20:44:25.969452   15637 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0430 20:44:25.990577   15637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0430 20:44:26.012526   15637 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	I0430 20:44:26.033484   15637 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0430 20:44:26.055339   15637 config.go:182] Loaded profile config "multinode-613000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 20:44:26.055913   15637 driver.go:392] Setting default libvirt URI to qemu:///system
	I0430 20:44:26.110466   15637 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0430 20:44:26.110631   15637 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0430 20:44:26.218822   15637 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:89 OomKillDisable:false NGoroutines:145 SystemTime:2024-05-01 03:44:26.207934393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211080192 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0430 20:44:26.262331   15637 out.go:177] * Using the docker driver based on existing profile
	I0430 20:44:26.283563   15637 start.go:297] selected driver: docker
	I0430 20:44:26.283599   15637 start.go:901] validating driver "docker" against &{Name:multinode-613000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0430 20:44:26.283723   15637 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0430 20:44:26.283923   15637 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0430 20:44:26.392428   15637 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:89 OomKillDisable:false NGoroutines:145 SystemTime:2024-05-01 03:44:26.381484187 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211080192 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0430 20:44:26.395521   15637 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0430 20:44:26.395584   15637 cni.go:84] Creating CNI manager for ""
	I0430 20:44:26.395594   15637 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0430 20:44:26.395669   15637 start.go:340] cluster config:
	{Name:multinode-613000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-613000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0430 20:44:26.417072   15637 out.go:177] * Starting "multinode-613000" primary control-plane node in "multinode-613000" cluster
	I0430 20:44:26.438838   15637 cache.go:121] Beginning downloading kic base image for docker with docker
	I0430 20:44:26.460780   15637 out.go:177] * Pulling base image v0.0.43-1714386659-18769 ...
	I0430 20:44:26.481953   15637 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 20:44:26.482016   15637 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e in local docker daemon
	I0430 20:44:26.482026   15637 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0430 20:44:26.482043   15637 cache.go:56] Caching tarball of preloaded images
	I0430 20:44:26.482298   15637 preload.go:173] Found /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0430 20:44:26.482321   15637 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0430 20:44:26.483372   15637 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/multinode-613000/config.json ...
	I0430 20:44:26.532222   15637 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e in local docker daemon, skipping pull
	I0430 20:44:26.532238   15637 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e exists in daemon, skipping load
	I0430 20:44:26.532263   15637 cache.go:194] Successfully downloaded all kic artifacts
	I0430 20:44:26.532301   15637 start.go:360] acquireMachinesLock for multinode-613000: {Name:mk4b1997cc63c071a5d4bd65917cfb80e5f3ad67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0430 20:44:26.532526   15637 start.go:364] duration metric: took 203.348µs to acquireMachinesLock for "multinode-613000"
	I0430 20:44:26.532552   15637 start.go:96] Skipping create...Using existing machine configuration
	I0430 20:44:26.532564   15637 fix.go:54] fixHost starting: 
	I0430 20:44:26.532805   15637 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:44:26.580945   15637 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:44:26.580997   15637 fix.go:112] recreateIfNeeded on multinode-613000: state= err=unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:26.581020   15637 fix.go:117] machineExists: false. err=machine does not exist
	I0430 20:44:26.602772   15637 out.go:177] * docker "multinode-613000" container is missing, will recreate.
	I0430 20:44:26.644683   15637 delete.go:124] DEMOLISHING multinode-613000 ...
	I0430 20:44:26.644855   15637 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:44:26.694456   15637 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	W0430 20:44:26.694508   15637 stop.go:83] unable to get state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:26.694525   15637 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:26.694877   15637 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:44:26.742730   15637 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:44:26.742791   15637 delete.go:82] Unable to get host status for multinode-613000, assuming it has already been deleted: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:26.742865   15637 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-613000
	W0430 20:44:26.790934   15637 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-613000 returned with exit code 1
	I0430 20:44:26.790965   15637 kic.go:371] could not find the container multinode-613000 to remove it. will try anyways
	I0430 20:44:26.791037   15637 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:44:26.838669   15637 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	W0430 20:44:26.838715   15637 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:26.838804   15637 cli_runner.go:164] Run: docker exec --privileged -t multinode-613000 /bin/bash -c "sudo init 0"
	W0430 20:44:26.886734   15637 cli_runner.go:211] docker exec --privileged -t multinode-613000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0430 20:44:26.886765   15637 oci.go:650] error shutdown multinode-613000: docker exec --privileged -t multinode-613000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:27.887654   15637 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:44:27.939518   15637 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:44:27.939561   15637 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:27.939575   15637 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:44:27.939610   15637 retry.go:31] will retry after 645.170111ms: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:28.585615   15637 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:44:28.638862   15637 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:44:28.638906   15637 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:28.638915   15637 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:44:28.638937   15637 retry.go:31] will retry after 967.539123ms: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:29.607608   15637 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:44:29.658961   15637 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:44:29.659003   15637 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:29.659018   15637 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:44:29.659041   15637 retry.go:31] will retry after 1.573301976s: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:31.233599   15637 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:44:31.284735   15637 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:44:31.284780   15637 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:31.284792   15637 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:44:31.284815   15637 retry.go:31] will retry after 1.410930198s: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:32.697324   15637 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:44:32.748561   15637 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:44:32.748603   15637 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:32.748610   15637 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:44:32.748632   15637 retry.go:31] will retry after 2.798499896s: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:35.549497   15637 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:44:35.600183   15637 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:44:35.600225   15637 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:35.600234   15637 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:44:35.600257   15637 retry.go:31] will retry after 2.08861404s: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:37.691271   15637 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:44:37.744372   15637 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:44:37.744418   15637 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:37.744427   15637 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:44:37.744451   15637 retry.go:31] will retry after 3.593821376s: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:41.340631   15637 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:44:41.394815   15637 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:44:41.394858   15637 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:41.394866   15637 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:44:41.394900   15637 retry.go:31] will retry after 6.20006175s: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:47.597376   15637 cli_runner.go:164] Run: docker container inspect multinode-613000 --format={{.State.Status}}
	W0430 20:44:47.649031   15637 cli_runner.go:211] docker container inspect multinode-613000 --format={{.State.Status}} returned with exit code 1
	I0430 20:44:47.649075   15637 oci.go:662] temporary error verifying shutdown: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	I0430 20:44:47.649087   15637 oci.go:664] temporary error: container multinode-613000 status is  but expect it to be exited
	I0430 20:44:47.649117   15637 oci.go:88] couldn't shut down multinode-613000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000
	 
	I0430 20:44:47.649197   15637 cli_runner.go:164] Run: docker rm -f -v multinode-613000
	I0430 20:44:47.697856   15637 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-613000
	W0430 20:44:47.746148   15637 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-613000 returned with exit code 1
	I0430 20:44:47.746246   15637 cli_runner.go:164] Run: docker network inspect multinode-613000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0430 20:44:47.795189   15637 cli_runner.go:164] Run: docker network rm multinode-613000
	I0430 20:44:47.901381   15637 fix.go:124] Sleeping 1 second for extra luck!
	I0430 20:44:48.902790   15637 start.go:125] createHost starting for "" (driver="docker")
	I0430 20:44:48.928765   15637 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0430 20:44:48.928954   15637 start.go:159] libmachine.API.Create for "multinode-613000" (driver="docker")
	I0430 20:44:48.928997   15637 client.go:168] LocalClient.Create starting
	I0430 20:44:48.929221   15637 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/ca.pem
	I0430 20:44:48.929315   15637 main.go:141] libmachine: Decoding PEM data...
	I0430 20:44:48.929349   15637 main.go:141] libmachine: Parsing certificate...
	I0430 20:44:48.929446   15637 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18779-7316/.minikube/certs/cert.pem
	I0430 20:44:48.929526   15637 main.go:141] libmachine: Decoding PEM data...
	I0430 20:44:48.929540   15637 main.go:141] libmachine: Parsing certificate...
	I0430 20:44:48.930243   15637 cli_runner.go:164] Run: docker network inspect multinode-613000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0430 20:44:48.981552   15637 cli_runner.go:211] docker network inspect multinode-613000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0430 20:44:48.981636   15637 network_create.go:281] running [docker network inspect multinode-613000] to gather additional debugging logs...
	I0430 20:44:48.981651   15637 cli_runner.go:164] Run: docker network inspect multinode-613000
	W0430 20:44:49.031933   15637 cli_runner.go:211] docker network inspect multinode-613000 returned with exit code 1
	I0430 20:44:49.031959   15637 network_create.go:284] error running [docker network inspect multinode-613000]: docker network inspect multinode-613000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-613000 not found
	I0430 20:44:49.031971   15637 network_create.go:286] output of [docker network inspect multinode-613000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-613000 not found
	
	** /stderr **
	I0430 20:44:49.032112   15637 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0430 20:44:49.081653   15637 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 20:44:49.083091   15637 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 20:44:49.083451   15637 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024a0910}
	I0430 20:44:49.083467   15637 network_create.go:124] attempt to create docker network multinode-613000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0430 20:44:49.083539   15637 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-613000 multinode-613000
	W0430 20:44:49.132252   15637 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-613000 multinode-613000 returned with exit code 1
	W0430 20:44:49.132289   15637 network_create.go:149] failed to create docker network multinode-613000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-613000 multinode-613000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0430 20:44:49.132324   15637 network_create.go:116] failed to create docker network multinode-613000 192.168.67.0/24, will retry: subnet is taken
	I0430 20:44:49.133932   15637 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0430 20:44:49.134324   15637 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0025804f0}
	I0430 20:44:49.134336   15637 network_create.go:124] attempt to create docker network multinode-613000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0430 20:44:49.134409   15637 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-613000 multinode-613000
	I0430 20:44:49.220621   15637 network_create.go:108] docker network multinode-613000 192.168.76.0/24 created
	I0430 20:44:49.220713   15637 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-613000" container
	I0430 20:44:49.220821   15637 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0430 20:44:49.270505   15637 cli_runner.go:164] Run: docker volume create multinode-613000 --label name.minikube.sigs.k8s.io=multinode-613000 --label created_by.minikube.sigs.k8s.io=true
	I0430 20:44:49.318641   15637 oci.go:103] Successfully created a docker volume multinode-613000
	I0430 20:44:49.318752   15637 cli_runner.go:164] Run: docker run --rm --name multinode-613000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-613000 --entrypoint /usr/bin/test -v multinode-613000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -d /var/lib
	I0430 20:44:49.551585   15637 oci.go:107] Successfully prepared a docker volume multinode-613000
	I0430 20:44:49.551640   15637 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 20:44:49.551653   15637 kic.go:194] Starting extracting preloaded images to volume ...
	I0430 20:44:49.551763   15637 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-613000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e -I lz4 -xf /preloaded.tar -C /extractDir

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-613000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-613000
helpers_test.go:235: (dbg) docker inspect multinode-613000:

-- stdout --
	[
	    {
	        "Name": "multinode-613000",
	        "Id": "97b91e238aee3a34e9300d393e72416250ff07598d13712671bdf9af6919b101",
	        "Created": "2024-05-01T03:44:49.180946263Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-613000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
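
Note: "docker inspect" resolved the name to the Docker network rather than a container here; the "Containers" map is empty, and the container inspect below reports "No such container". So the node was never created (or was removed), but its network survived. Decoding such inspect output takes only a small struct; a sketch under that reading, not the helpers' actual code:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// network keeps only the fields of the inspect output shown above.
	type network struct {
		Name string
		IPAM struct {
			Config []struct {
				Subnet  string
				Gateway string
			}
		}
		Labels map[string]string
	}

	func main() {
		out, err := exec.Command("docker", "network", "inspect", "multinode-613000").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		var nets []network
		if err := json.Unmarshal(out, &nets); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, n := range nets {
			if len(n.IPAM.Config) > 0 {
				fmt.Printf("%s: subnet=%s gateway=%s\n",
					n.Name, n.IPAM.Config[0].Subnet, n.IPAM.Config[0].Gateway)
			}
		}
	}
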
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-613000 -n multinode-613000: exit status 7 (112.843722ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0430 20:46:12.130579   15738 status.go:249] status error: host: state: unknown state "multinode-613000": docker container inspect multinode-613000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-613000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-613000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (106.36s)
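
Note the two failure shapes in this section: "exit status 7" means the status command ran to completion and chose that code itself (the harness flags it as "may be ok"), while "signal: killed" means the start command was terminated from outside. In Go's os/exec the two cases are distinguishable from the returned error; a minimal, Unix-only sketch (not the harness code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"syscall"
	)

	// classify reports whether a command exited with a code of its own or was
	// killed by a signal, the two failure shapes seen in the log above.
	func classify(err error) string {
		var ee *exec.ExitError
		if !errors.As(err, &ee) {
			return fmt.Sprintf("did not run: %v", err)
		}
		ws := ee.Sys().(syscall.WaitStatus) // Unix-specific
		if ws.Signaled() {
			return fmt.Sprintf("killed by signal %v", ws.Signal())
		}
		return fmt.Sprintf("exit status %d", ws.ExitStatus())
	}

	func main() {
		err := exec.Command("false").Run() // exits with status 1
		fmt.Println(classify(err))
	}
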

                                                
                                    
TestScheduledStopUnix (300.89s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-385000 --memory=2048 --driver=docker 
E0430 20:51:06.798037    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 20:51:41.433951    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 20:53:04.480079    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-385000 --memory=2048 --driver=docker : signal: killed (5m0.004098672s)

                                                
                                                
-- stdout --
	* [scheduled-stop-385000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18779
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-385000" primary control-plane node in "scheduled-stop-385000" cluster
	* Pulling base image v0.0.43-1714386659-18769 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [scheduled-stop-385000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18779
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-385000" primary control-plane node in "scheduled-stop-385000" cluster
	* Pulling base image v0.0.43-1714386659-18769 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-04-30 20:53:26.026275 -0700 PDT m=+4974.450951774
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-385000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-385000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "scheduled-stop-385000",
	        "Id": "424fbcf4840e69eee1f04340cd8af87ee94a7e625245ddc6224b2633e83d3ccb",
	        "Created": "2024-05-01T03:48:27.175233884Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-385000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-385000 -n scheduled-stop-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-385000 -n scheduled-stop-385000: exit status 7 (113.743372ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0430 20:53:26.190294   16206 status.go:249] status error: host: state: unknown state "scheduled-stop-385000": docker container inspect scheduled-stop-385000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-385000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-385000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-385000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-385000
--- FAIL: TestScheduledStopUnix (300.89s)
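
Note: the start command above died with "signal: killed" after almost exactly five minutes (5m0.004098672s), which is consistent with a caller-imposed deadline rather than a crash inside minikube: when a context deadline passes, os/exec kills the child process and Run reports "signal: killed". A minimal sketch of that mechanism (the timeout value is illustrative):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// A deadline far shorter than the command needs, to force the kill.
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()

		// sleep outlives the deadline, so exec kills it with SIGKILL.
		err := exec.CommandContext(ctx, "sleep", "60").Run()
		fmt.Println(err) // prints: signal: killed
	}
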

                                                
                                    
TestSkaffold (300.89s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2112277078 version
skaffold_test.go:59: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2112277078 version: (1.457637024s)
skaffold_test.go:63: skaffold version: v2.11.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-258000 --memory=2600 --driver=docker 
E0430 20:56:06.799637    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 20:56:41.435835    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-258000 --memory=2600 --driver=docker : signal: killed (4m56.702534348s)

                                                
                                                
-- stdout --
	* [skaffold-258000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18779
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-258000" primary control-plane node in "skaffold-258000" cluster
	* Pulling base image v0.0.43-1714386659-18769 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [skaffold-258000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18779
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-258000" primary control-plane node in "skaffold-258000" cluster
	* Pulling base image v0.0.43-1714386659-18769 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
panic.go:626: *** TestSkaffold FAILED at 2024-04-30 20:58:26.920295 -0700 PDT m=+5275.343351822
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-258000
helpers_test.go:235: (dbg) docker inspect skaffold-258000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "skaffold-258000",
	        "Id": "8b67295ec0bd93caa4bc66d4407ac10efc35c9c3a7ee430cd45cd1eac6d3c8f1",
	        "Created": "2024-05-01T03:53:31.349091759Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-258000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-258000 -n skaffold-258000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-258000 -n skaffold-258000: exit status 7 (112.562443ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0430 20:58:27.084654   16336 status.go:249] status error: host: state: unknown state "skaffold-258000": docker container inspect skaffold-258000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-258000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-258000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-258000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-258000
--- FAIL: TestSkaffold (300.89s)

                                                
                                    
TestInsufficientStorage (300.73s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-867000 --memory=2048 --output=json --wait=true --driver=docker 
E0430 21:01:06.801260    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 21:01:41.437862    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-867000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.003095518s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"feeea076-b4ce-4358-bd55-0eb6ccf2478f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-867000] minikube v1.33.0 on Darwin 14.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5b38d343-ff95-4677-be66-3a8c32c53377","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18779"}}
	{"specversion":"1.0","id":"13b824d4-1d9f-494c-a4da-4c0e3dad0a73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig"}}
	{"specversion":"1.0","id":"3d073dac-d96d-41ce-99e8-f75f76219170","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"a62dd86f-c668-44e3-a384-f9c085e3aa76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9f55db7c-28df-48e3-8e1f-92e5d99314a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube"}}
	{"specversion":"1.0","id":"64a6ac17-6daf-4b3c-aee7-fedff72a0789","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5c7404dd-7819-481a-83de-06a2a0c5b93e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"bfde345e-f602-4b04-b66d-d4420ceda153","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ff61c577-7c4c-4e2b-8723-1a397ed2903f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"302d8b18-9441-4ca7-84f1-6d5ec7c0a2ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"f71882d7-62a0-464f-bc77-6b8f7e966121","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-867000\" primary control-plane node in \"insufficient-storage-867000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ec42c3fc-5fb5-49f0-9b25-87baf443209c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.43-1714386659-18769 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"01670140-3190-482a-b00a-03937b2b5fb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-867000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-867000 --output=json --layout=cluster: context deadline exceeded (943ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-867000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-867000
--- FAIL: TestInsufficientStorage (300.73s)
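
Note: with --output=json, each progress line above is a CloudEvents-style JSON object whose "data" payload carries the step fields. The later "unmarshalling: unexpected end of JSON input" is exactly what json.Unmarshal reports when handed empty input, consistent with the status command producing no output before its deadline. A sketch of decoding one such line (field set trimmed to what the log shows):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// event models the CloudEvents-style lines emitted by --output=json,
	// keeping only the fields visible in the log above.
	type event struct {
		Specversion string            `json:"specversion"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step",` +
			`"data":{"currentstep":"1","message":"Using the docker driver","totalsteps":"19"}}`

		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		fmt.Printf("step %s/%s: %s\n",
			ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])

		// Empty input reproduces the failure seen at status_test.go:87.
		fmt.Println(json.Unmarshal([]byte(""), &ev)) // unexpected end of JSON input
	}
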

                                                
                                    

Test pass (162/201)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 18.39
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.63
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.37
12 TestDownloadOnly/v1.30.0/json-events 11.55
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.3
18 TestDownloadOnly/v1.30.0/DeleteAll 0.63
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.37
20 TestDownloadOnlyKic 2
21 TestBinaryMirror 1.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.2
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.22
27 TestAddons/Setup 298.06
31 TestAddons/parallel/InspektorGadget 10.76
32 TestAddons/parallel/MetricsServer 5.88
33 TestAddons/parallel/HelmTiller 11.96
35 TestAddons/parallel/CSI 68.56
36 TestAddons/parallel/Headlamp 12.22
37 TestAddons/parallel/CloudSpanner 5.62
38 TestAddons/parallel/LocalPath 54.9
39 TestAddons/parallel/NvidiaDevicePlugin 5.6
40 TestAddons/parallel/Yakd 5.01
43 TestAddons/serial/GCPAuth/Namespaces 0.1
44 TestAddons/StoppedEnableDisable 11.83
52 TestHyperKitDriverInstallOrUpdate 6.38
55 TestErrorSpam/setup 21.07
56 TestErrorSpam/start 2.07
57 TestErrorSpam/status 1.21
58 TestErrorSpam/pause 1.67
59 TestErrorSpam/unpause 1.75
60 TestErrorSpam/stop 11.42
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 35.28
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 33.7
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.07
71 TestFunctional/serial/CacheCmd/cache/add_remote 10.75
72 TestFunctional/serial/CacheCmd/cache/add_local 1.63
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
74 TestFunctional/serial/CacheCmd/cache/list 0.09
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.4
76 TestFunctional/serial/CacheCmd/cache/cache_reload 3.59
77 TestFunctional/serial/CacheCmd/cache/delete 0.18
78 TestFunctional/serial/MinikubeKubectlCmd 0.97
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.36
80 TestFunctional/serial/ExtraConfig 36.21
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 2.95
83 TestFunctional/serial/LogsFileCmd 3.04
84 TestFunctional/serial/InvalidService 4.35
86 TestFunctional/parallel/ConfigCmd 0.59
87 TestFunctional/parallel/DashboardCmd 12.86
88 TestFunctional/parallel/DryRun 1.51
89 TestFunctional/parallel/InternationalLanguage 0.66
90 TestFunctional/parallel/StatusCmd 1.17
95 TestFunctional/parallel/AddonsCmd 0.27
96 TestFunctional/parallel/PersistentVolumeClaim 25.06
98 TestFunctional/parallel/SSHCmd 0.75
99 TestFunctional/parallel/CpCmd 2.77
100 TestFunctional/parallel/MySQL 32.13
101 TestFunctional/parallel/FileSync 0.41
102 TestFunctional/parallel/CertSync 2.47
106 TestFunctional/parallel/NodeLabels 0.06
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
110 TestFunctional/parallel/License 0.59
111 TestFunctional/parallel/Version/short 0.11
112 TestFunctional/parallel/Version/components 0.58
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.37
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
117 TestFunctional/parallel/ImageCommands/ImageBuild 5.63
118 TestFunctional/parallel/ImageCommands/Setup 5.38
119 TestFunctional/parallel/DockerEnv/bash 1.76
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.3
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.33
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.37
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.86
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.77
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 9.97
126 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.36
127 TestFunctional/parallel/ImageCommands/ImageRemove 0.66
128 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.47
129 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.23
130 TestFunctional/parallel/ServiceCmd/DeployApp 104.13
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.56
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 71.14
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
142 TestFunctional/parallel/ServiceCmd/List 0.6
143 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
144 TestFunctional/parallel/ServiceCmd/HTTPS 15
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.56
146 TestFunctional/parallel/ProfileCmd/profile_list 0.53
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.53
148 TestFunctional/parallel/MountCmd/any-port 11.47
149 TestFunctional/parallel/ServiceCmd/Format 15
150 TestFunctional/parallel/MountCmd/specific-port 2.22
151 TestFunctional/parallel/MountCmd/VerifyCleanup 2.64
152 TestFunctional/parallel/ServiceCmd/URL 15
153 TestFunctional/delete_addon-resizer_images 0.13
154 TestFunctional/delete_my-image_image 0.05
155 TestFunctional/delete_minikube_cached_images 0.07
159 TestMultiControlPlane/serial/StartCluster 100.1
160 TestMultiControlPlane/serial/DeployApp 10.77
161 TestMultiControlPlane/serial/PingHostFromPods 1.46
162 TestMultiControlPlane/serial/AddWorkerNode 19.28
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.14
165 TestMultiControlPlane/serial/CopyFile 24.71
166 TestMultiControlPlane/serial/StopSecondaryNode 11.89
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.85
168 TestMultiControlPlane/serial/RestartSecondaryNode 60.97
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.12
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 230.61
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.69
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
173 TestMultiControlPlane/serial/StopCluster 32.75
174 TestMultiControlPlane/serial/RestartCluster 83.88
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
176 TestMultiControlPlane/serial/AddSecondaryNode 37.54
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.13
180 TestImageBuild/serial/Setup 19.71
181 TestImageBuild/serial/NormalBuild 4.02
182 TestImageBuild/serial/BuildWithBuildArg 1.55
183 TestImageBuild/serial/BuildWithDockerIgnore 1.25
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.37
188 TestJSONOutput/start/Command 74.61
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.56
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.58
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 10.79
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.78
213 TestKicCustomNetwork/create_custom_network 22.99
214 TestKicCustomNetwork/use_default_bridge_network 22.11
215 TestKicExistingNetwork 21.76
216 TestKicCustomSubnet 22.24
217 TestKicStaticIP 23.22
218 TestMainNoArgs 0.09
219 TestMinikubeProfile 47.13
222 TestMountStart/serial/StartWithMountFirst 7.42
242 TestPreload 133
263 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 8.21
264 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 10.7
TestDownloadOnly/v1.20.0/json-events (18.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-415000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-415000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker : (18.389911249s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (18.39s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-415000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-415000: exit status 85 (303.759625ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-415000 | jenkins | v1.33.0 | 30 Apr 24 19:30 PDT |          |
	|         | -p download-only-415000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/30 19:30:31
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0430 19:30:31.453205    7856 out.go:291] Setting OutFile to fd 1 ...
	I0430 19:30:31.453386    7856 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 19:30:31.453392    7856 out.go:304] Setting ErrFile to fd 2...
	I0430 19:30:31.453396    7856 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 19:30:31.453578    7856 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	W0430 19:30:31.453677    7856 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18779-7316/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18779-7316/.minikube/config/config.json: no such file or directory
	I0430 19:30:31.455383    7856 out.go:298] Setting JSON to true
	I0430 19:30:31.477979    7856 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1802,"bootTime":1714528829,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0430 19:30:31.478077    7856 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0430 19:30:31.499654    7856 out.go:97] [download-only-415000] minikube v1.33.0 on Darwin 14.4.1
	I0430 19:30:31.521334    7856 out.go:169] MINIKUBE_LOCATION=18779
	I0430 19:30:31.499960    7856 notify.go:220] Checking for updates...
	W0430 19:30:31.499972    7856 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball: no such file or directory
	I0430 19:30:31.570099    7856 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	I0430 19:30:31.591432    7856 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0430 19:30:31.612801    7856 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0430 19:30:31.634481    7856 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	W0430 19:30:31.678541    7856 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0430 19:30:31.679138    7856 driver.go:392] Setting default libvirt URI to qemu:///system
	I0430 19:30:31.733444    7856 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0430 19:30:31.733576    7856 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0430 19:30:31.844656    7856 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:97 SystemTime:2024-05-01 02:30:31.833790682 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211080192 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0430 19:30:31.866214    7856 out.go:97] Using the docker driver based on user configuration
	I0430 19:30:31.866304    7856 start.go:297] selected driver: docker
	I0430 19:30:31.866325    7856 start.go:901] validating driver "docker" against <nil>
	I0430 19:30:31.866543    7856 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0430 19:30:31.979717    7856 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:97 SystemTime:2024-05-01 02:30:31.96997171 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211080192 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0430 19:30:31.979892    7856 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0430 19:30:31.982932    7856 start_flags.go:393] Using suggested 5875MB memory alloc based on sys=32768MB, container=5923MB
	I0430 19:30:31.983092    7856 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0430 19:30:32.004761    7856 out.go:169] Using Docker Desktop driver with root privileges
	I0430 19:30:32.025773    7856 cni.go:84] Creating CNI manager for ""
	I0430 19:30:32.025816    7856 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0430 19:30:32.025957    7856 start.go:340] cluster config:
	{Name:download-only-415000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:5875 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-415000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0430 19:30:32.047388    7856 out.go:97] Starting "download-only-415000" primary control-plane node in "download-only-415000" cluster
	I0430 19:30:32.047431    7856 cache.go:121] Beginning downloading kic base image for docker with docker
	I0430 19:30:32.068615    7856 out.go:97] Pulling base image v0.0.43-1714386659-18769 ...
	I0430 19:30:32.068711    7856 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0430 19:30:32.068811    7856 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e in local docker daemon
	I0430 19:30:32.118296    7856 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e to local cache
	I0430 19:30:32.118537    7856 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e in local cache directory
	I0430 19:30:32.118675    7856 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e to local cache
	I0430 19:30:32.124638    7856 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0430 19:30:32.124656    7856 cache.go:56] Caching tarball of preloaded images
	I0430 19:30:32.124806    7856 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0430 19:30:32.146645    7856 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0430 19:30:32.146694    7856 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0430 19:30:32.226417    7856 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0430 19:30:36.737982    7856 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0430 19:30:36.738150    7856 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0430 19:30:37.288376    7856 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0430 19:30:37.288595    7856 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/download-only-415000/config.json ...
	I0430 19:30:37.288618    7856 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/download-only-415000/config.json: {Name:mk07f3823a0a3893afed52fefda47c60ee126724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0430 19:30:37.288900    7856 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0430 19:30:37.289192    7856 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-415000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-415000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)
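
Note: the Last Start log above downloads the preload tarball from a URL carrying a "?checksum=md5:..." hint and then logs "saving checksum" / "verifying checksum". The verification step reduces to hashing the file on disk and comparing digests, roughly as in this sketch (the helper name is made up; the expected hash is the one from the URL in the log):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 hashes the file at path and compares it to the expected hex
	// digest, as in the preload checksum verification logged above.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		// Hash taken from the download URL in the log above.
		err := verifyMD5("preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4",
			"9a82241e9b8b4ad2b5cca73108f2c7a3")
		fmt.Println(err)
	}
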

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.63s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-415000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                    
TestDownloadOnly/v1.30.0/json-events (11.55s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-976000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-976000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=docker : (11.546522382s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (11.55s)

                                                
                                    
TestDownloadOnly/v1.30.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-976000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-976000: exit status 85 (302.790738ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-415000 | jenkins | v1.33.0 | 30 Apr 24 19:30 PDT |                     |
	|         | -p download-only-415000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 30 Apr 24 19:30 PDT | 30 Apr 24 19:30 PDT |
	| delete  | -p download-only-415000        | download-only-415000 | jenkins | v1.33.0 | 30 Apr 24 19:30 PDT | 30 Apr 24 19:30 PDT |
	| start   | -o=json --download-only        | download-only-976000 | jenkins | v1.33.0 | 30 Apr 24 19:30 PDT |                     |
	|         | -p download-only-976000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/30 19:30:51
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0430 19:30:51.156222    7926 out.go:291] Setting OutFile to fd 1 ...
	I0430 19:30:51.156396    7926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 19:30:51.156402    7926 out.go:304] Setting ErrFile to fd 2...
	I0430 19:30:51.156405    7926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 19:30:51.156590    7926 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 19:30:51.158015    7926 out.go:298] Setting JSON to true
	I0430 19:30:51.179879    7926 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1822,"bootTime":1714528829,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0430 19:30:51.179979    7926 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0430 19:30:51.201751    7926 out.go:97] [download-only-976000] minikube v1.33.0 on Darwin 14.4.1
	I0430 19:30:51.223633    7926 out.go:169] MINIKUBE_LOCATION=18779
	I0430 19:30:51.201881    7926 notify.go:220] Checking for updates...
	I0430 19:30:51.266384    7926 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	I0430 19:30:51.287756    7926 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0430 19:30:51.308778    7926 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0430 19:30:51.329540    7926 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	W0430 19:30:51.371854    7926 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0430 19:30:51.372390    7926 driver.go:392] Setting default libvirt URI to qemu:///system
	I0430 19:30:51.426962    7926 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0430 19:30:51.427106    7926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0430 19:30:51.535837    7926 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:99 SystemTime:2024-05-01 02:30:51.525331966 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211080192 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0430 19:30:51.557343    7926 out.go:97] Using the docker driver based on user configuration
	I0430 19:30:51.557435    7926 start.go:297] selected driver: docker
	I0430 19:30:51.557451    7926 start.go:901] validating driver "docker" against <nil>
	I0430 19:30:51.557601    7926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0430 19:30:51.669454    7926 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:99 SystemTime:2024-05-01 02:30:51.659333597 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211080192 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0430 19:30:51.669641    7926 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0430 19:30:51.672543    7926 start_flags.go:393] Using suggested 5875MB memory alloc based on sys=32768MB, container=5923MB
	I0430 19:30:51.672683    7926 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0430 19:30:51.694301    7926 out.go:169] Using Docker Desktop driver with root privileges
	I0430 19:30:51.716098    7926 cni.go:84] Creating CNI manager for ""
	I0430 19:30:51.716141    7926 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0430 19:30:51.716158    7926 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0430 19:30:51.716292    7926 start.go:340] cluster config:
	{Name:download-only-976000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:5875 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-976000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0430 19:30:51.737869    7926 out.go:97] Starting "download-only-976000" primary control-plane node in "download-only-976000" cluster
	I0430 19:30:51.737911    7926 cache.go:121] Beginning downloading kic base image for docker with docker
	I0430 19:30:51.759051    7926 out.go:97] Pulling base image v0.0.43-1714386659-18769 ...
	I0430 19:30:51.759115    7926 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 19:30:51.759206    7926 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e in local docker daemon
	I0430 19:30:51.808072    7926 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e to local cache
	I0430 19:30:51.808326    7926 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e in local cache directory
	I0430 19:30:51.808344    7926 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e in local cache directory, skipping pull
	I0430 19:30:51.808351    7926 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e exists in cache, skipping pull
	I0430 19:30:51.808359    7926 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e as a tarball
	I0430 19:30:51.816888    7926 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0430 19:30:51.816923    7926 cache.go:56] Caching tarball of preloaded images
	I0430 19:30:51.817099    7926 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 19:30:51.838966    7926 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0430 19:30:51.838996    7926 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0430 19:30:51.931614    7926 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4?checksum=md5:00b6acf85a82438f3897c0a6fafdcee7 -> /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0430 19:30:55.432370    7926 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0430 19:30:55.432585    7926 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0430 19:30:55.920357    7926 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0430 19:30:55.920616    7926 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/download-only-976000/config.json ...
	I0430 19:30:55.920642    7926 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/download-only-976000/config.json: {Name:mkaeb0ad2f3069a44dc45f5d07c1567b823eb3dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0430 19:30:55.920935    7926 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0430 19:30:55.921140    7926 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18779-7316/.minikube/cache/darwin/amd64/v1.30.0/kubectl
	
	
	* The control-plane node download-only-976000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-976000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.30s)
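Note: the preload fetch above (download.go:107) appends a "?checksum=md5:..." query to the tarball URL, and preload.go then saves and verifies that checksum. The Go sketch below shows the general hash-while-downloading pattern, not minikube's actual implementation; the downloadWithMD5 helper is illustrative, while the URL and MD5 value are the ones from this log.

	package main

	import (
		"crypto/md5"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// downloadWithMD5 streams url into dest and rejects the file if its MD5
	// digest does not match wantHex. Illustrative only.
	func downloadWithMD5(url, dest, wantHex string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()
		h := md5.New()
		// Hash and write in one pass so the file and the digest see identical bytes.
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := fmt.Sprintf("%x", h.Sum(nil)); got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		// URL and checksum taken from the download.go:107 line above.
		err := downloadWithMD5(
			"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4",
			"preloaded-images.tar.lz4",
			"00b6acf85a82438f3897c0a6fafdcee7",
		)
		if err != nil {
			fmt.Println(err)
		}
	}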

TestDownloadOnly/v1.30.0/DeleteAll (0.63s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.63s)

TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-976000
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.37s)

TestDownloadOnlyKic (2s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-627000 --alsologtostderr --driver=docker 
aaa_download_only_test.go:232: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-627000 --alsologtostderr --driver=docker : (1.048727543s)
helpers_test.go:175: Cleaning up "download-docker-627000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-627000
--- PASS: TestDownloadOnlyKic (2.00s)

TestBinaryMirror (1.6s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-000000 --alsologtostderr --binary-mirror http://127.0.0.1:52273 --driver=docker 
aaa_download_only_test.go:314: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-000000 --alsologtostderr --binary-mirror http://127.0.0.1:52273 --driver=docker : (1.004421998s)
helpers_test.go:175: Cleaning up "binary-mirror-000000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-000000
--- PASS: TestBinaryMirror (1.60s)
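Note: TestBinaryMirror passes --binary-mirror http://127.0.0.1:52273, so Kubernetes binaries are fetched from a local HTTP server instead of dl.k8s.io. A minimal sketch of such a mirror follows, assuming a ./mirror directory laid out like the upstream release tree; the directory name is an assumption, the port is the one from this log.

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve ./mirror with the same layout as dl.k8s.io,
		// e.g. ./mirror/release/v1.30.0/bin/darwin/amd64/kubectl.
		log.Println("binary mirror on 127.0.0.1:52273")
		log.Fatal(http.ListenAndServe("127.0.0.1:52273", http.FileServer(http.Dir("./mirror"))))
	}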

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.2s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-257000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-257000: exit status 85 (196.538703ms)

-- stdout --
	* Profile "addons-257000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-257000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.20s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-257000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-257000: exit status 85 (217.913557ms)

-- stdout --
	* Profile "addons-257000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-257000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)
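Note: both PreSetup checks above deliberately run an addon command against a profile that does not exist and assert on exit status 85 rather than on output text. A hedged sketch of reading that exit code via Go's os/exec (the binary path and arguments are copied from the log; the assertion style is illustrative, not the suite's actual helper):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "addons", "enable", "dashboard", "-p", "addons-257000")
		out, err := cmd.CombinedOutput()
		// A non-zero exit surfaces as *exec.ExitError, which carries the code.
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 85 {
			fmt.Printf("got expected exit status 85:\n%s", out)
		} else if err != nil {
			fmt.Println("unexpected failure:", err)
		}
	}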

TestAddons/Setup (298.06s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-257000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-257000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (4m58.062619107s)
--- PASS: TestAddons/Setup (298.06s)

TestAddons/parallel/InspektorGadget (10.76s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2brf7" [d2fa76be-7db3-4fbf-bf86-619049aebc2a] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003456716s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-257000
addons_test.go:841: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-257000: (5.752224494s)
--- PASS: TestAddons/parallel/InspektorGadget (10.76s)

TestAddons/parallel/MetricsServer (5.88s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 2.318658ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-dvzx9" [8c19495b-3b8a-47fa-b513-f4b3f4c2cc05] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004333907s
addons_test.go:415: (dbg) Run:  kubectl --context addons-257000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-amd64 -p addons-257000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.88s)

TestAddons/parallel/HelmTiller (11.96s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 2.251364ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-pqrlw" [db6221b2-51cd-4456-ac67-652012fd94a3] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004324432s
addons_test.go:473: (dbg) Run:  kubectl --context addons-257000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-257000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.290777773s)
addons_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 -p addons-257000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.96s)

TestAddons/parallel/CSI (68.56s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 15.8205ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-257000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-257000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [99077fb7-7bba-45b0-9254-21a5092cf9e2] Pending
helpers_test.go:344: "task-pv-pod" [99077fb7-7bba-45b0-9254-21a5092cf9e2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [99077fb7-7bba-45b0-9254-21a5092cf9e2] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.006519499s
addons_test.go:584: (dbg) Run:  kubectl --context addons-257000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-257000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-257000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-257000 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-257000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-257000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-257000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5752356f-2457-4758-9a28-9056af77a50b] Pending
helpers_test.go:344: "task-pv-pod-restore" [5752356f-2457-4758-9a28-9056af77a50b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5752356f-2457-4758-9a28-9056af77a50b] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005493735s
addons_test.go:626: (dbg) Run:  kubectl --context addons-257000 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-257000 delete pod task-pv-pod-restore: (1.024856956s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-257000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-257000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-amd64 -p addons-257000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-amd64 -p addons-257000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.036762365s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-amd64 -p addons-257000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (68.56s)
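Note: each repeated helpers_test.go:394 line above is one iteration of a poll that reads the PVC phase via jsonpath until it reports Bound. A minimal Go sketch of that loop follows; the 2-second interval is an assumption (the real helper's interval is not visible in this log), while the kubectl invocation, profile, and PVC name match the log.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForPVCBound polls the PVC's .status.phase until it is Bound or the
	// timeout elapses. Illustrative stand-in for the test helper.
	func waitForPVCBound(kubecontext, name, namespace string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, _ := exec.Command("kubectl", "--context", kubecontext, "get", "pvc", name,
				"-o", "jsonpath={.status.phase}", "-n", namespace).Output()
			if string(out) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s not Bound within %v", namespace, name, timeout)
	}

	func main() {
		if err := waitForPVCBound("addons-257000", "hpvc", "default", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}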

TestAddons/parallel/Headlamp (12.22s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-257000 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-257000 --alsologtostderr -v=1: (1.217166121s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-8dgjh" [fdb518b2-5cff-4452-884c-f4792291f490] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-8dgjh" [fdb518b2-5cff-4452-884c-f4792291f490] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.00515778s
--- PASS: TestAddons/parallel/Headlamp (12.22s)

TestAddons/parallel/CloudSpanner (5.62s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dc8d859f6-kz6dq" [3cee6284-dc4a-4e25-818d-859cc1827368] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005151364s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-257000
--- PASS: TestAddons/parallel/CloudSpanner (5.62s)

TestAddons/parallel/LocalPath (54.9s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-257000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-257000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [da7b5377-df67-43d3-8159-b6cfa5472d1c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [da7b5377-df67-43d3-8159-b6cfa5472d1c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [da7b5377-df67-43d3-8159-b6cfa5472d1c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004249141s
addons_test.go:891: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-amd64 -p addons-257000 ssh "cat /opt/local-path-provisioner/pvc-67b9f7fe-b288-4404-899c-316ea1fd851b_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-257000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-257000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-amd64 -p addons-257000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-amd64 -p addons-257000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.012593236s)
--- PASS: TestAddons/parallel/LocalPath (54.90s)

TestAddons/parallel/NvidiaDevicePlugin (5.6s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rnpvq" [3388bdcc-4a87-43e7-8d16-f74eb4dd1d54] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00544834s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-257000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.60s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-6z4s4" [fa339475-a19a-494d-bd3a-e6f206475ee3] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005594087s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-257000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-257000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (11.83s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-257000
addons_test.go:172: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-257000: (11.106274179s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-257000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-257000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-257000
--- PASS: TestAddons/StoppedEnableDisable (11.83s)

TestHyperKitDriverInstallOrUpdate (6.38s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.38s)

TestErrorSpam/setup (21.07s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-747000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-747000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-747000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-747000 --driver=docker : (21.069601357s)
--- PASS: TestErrorSpam/setup (21.07s)

TestErrorSpam/start (2.07s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-747000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-747000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-747000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-747000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-747000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-747000 start --dry-run
--- PASS: TestErrorSpam/start (2.07s)

TestErrorSpam/status (1.21s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-747000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-747000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-747000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-747000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-747000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-747000 status
--- PASS: TestErrorSpam/status (1.21s)

TestErrorSpam/pause (1.67s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-747000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-747000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-747000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-747000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-747000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-747000 pause
--- PASS: TestErrorSpam/pause (1.67s)

TestErrorSpam/unpause (1.75s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-747000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-747000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-747000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-747000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-747000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-747000 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

TestErrorSpam/stop (11.42s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-747000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-747000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-747000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-747000 stop: (10.777903331s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-747000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-747000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-747000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-747000 stop
--- PASS: TestErrorSpam/stop (11.42s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18779-7316/.minikube/files/etc/test/nested/copy/7854/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (35.28s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-558000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-558000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (35.279492069s)
--- PASS: TestFunctional/serial/StartWithProxy (35.28s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.7s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-558000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-558000 --alsologtostderr -v=8: (33.69564274s)
functional_test.go:659: soft start took 33.696223306s for "functional-558000" cluster.
--- PASS: TestFunctional/serial/SoftStart (33.70s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-558000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (10.75s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-558000 cache add registry.k8s.io/pause:3.1: (3.993060961s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-558000 cache add registry.k8s.io/pause:3.3: (3.983425965s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-558000 cache add registry.k8s.io/pause:latest: (2.774906727s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (10.75s)

TestFunctional/serial/CacheCmd/cache/add_local (1.63s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-558000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1153554063/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 cache add minikube-local-cache-test:functional-558000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-558000 cache add minikube-local-cache-test:functional-558000: (1.08580246s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 cache delete minikube-local-cache-test:functional-558000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-558000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.63s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

TestFunctional/serial/CacheCmd/cache/list (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.40s)

TestFunctional/serial/CacheCmd/cache/cache_reload (3.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-558000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (381.251647ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-558000 cache reload: (2.398519605s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.59s)
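
The reload round trip above is: remove the image inside the node, confirm crictl inspecti now fails, run cache reload, confirm inspecti succeeds again. A hedged sketch of that sequence (not the test's actual helper code):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and reports whether it exited zero.
func run(name string, args ...string) bool {
	return exec.Command(name, args...).Run() == nil
}

func main() {
	const profile = "functional-558000" // profile name taken from the log
	// Delete the cached image inside the node.
	run("minikube", "-p", profile, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
	// inspecti should now fail with a non-zero exit, as in the log above.
	if run("minikube", "-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") {
		fmt.Println("unexpected: image still present")
	}
	// Reload everything in the host-side cache back into the node ...
	run("minikube", "-p", profile, "cache", "reload")
	// ... after which the image is visible to crictl again.
	fmt.Println("restored:", run("minikube", "-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"))
}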

TestFunctional/serial/CacheCmd/cache/delete (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

TestFunctional/serial/MinikubeKubectlCmd (0.97s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 kubectl -- --context functional-558000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.97s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.36s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-558000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-558000 get pods: (1.360537906s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.36s)

TestFunctional/serial/ExtraConfig (36.21s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-558000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0430 19:41:06.567696    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 19:41:06.574297    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 19:41:06.584446    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 19:41:06.604653    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 19:41:06.646230    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 19:41:06.727811    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 19:41:06.888510    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 19:41:07.208793    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 19:41:07.850841    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 19:41:09.131194    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 19:41:11.691780    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 19:41:16.813055    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-558000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.205306692s)
functional_test.go:757: restart took 36.205430292s for "functional-558000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.21s)
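
The restart above threads a component-scoped option through --extra-config, whose value takes the form component.key=value. A minimal reproduction of the same invocation, assuming minikube on PATH:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Restart the existing profile, injecting an apiserver admission plugin
	// and waiting for all components, as the test does.
	cmd := exec.Command("minikube", "start", "-p", "functional-558000",
		"--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision",
		"--wait=all")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("restart failed: %v\n%s", err, out)
	}
}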

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-558000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
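
The health check selects control-plane pods by the tier=control-plane label and reads status.phase plus the Ready condition from each, which is what produces the phase/status pairs logged above. A decoding sketch against the standard pod JSON (struct trimmed to the fields the check needs):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-558000",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		// kubeadm static pods carry a "component" label (etcd, kube-apiserver, ...).
		fmt.Printf("%s phase: %s\n", p.Metadata.Labels["component"], p.Status.Phase)
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s ready: %s\n", p.Metadata.Labels["component"], c.Status)
			}
		}
	}
}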

TestFunctional/serial/LogsCmd (2.95s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-558000 logs: (2.948494741s)
--- PASS: TestFunctional/serial/LogsCmd (2.95s)

TestFunctional/serial/LogsFileCmd (3.04s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd3398010747/001/logs.txt
E0430 19:41:27.053196    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-558000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd3398010747/001/logs.txt: (3.035975789s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.04s)

TestFunctional/serial/InvalidService (4.35s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-558000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-558000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-558000: exit status 115 (559.020324ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30724 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-558000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.35s)

TestFunctional/parallel/ConfigCmd (0.59s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-558000 config get cpus: exit status 14 (69.162589ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-558000 config get cpus: exit status 14 (68.334162ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.59s)
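
Note the contract the test exercises: config get on an unset key exits with status 14 and "specified key could not be found in config", while set/get/unset round-trip cleanly. A sketch that branches on that exit code — 14 is taken from this log and treated as observed behavior, not a documented guarantee:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// getConfig returns the value of a minikube config key, or ok=false
// when the key is unset (observed as exit status 14 in the log above).
func getConfig(profile, key string) (value string, ok bool, err error) {
	out, err := exec.Command("minikube", "-p", profile, "config", "get", key).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		return "", false, nil // key not present in config
	}
	if err != nil {
		return "", false, err
	}
	return string(out), true, nil
}

func main() {
	if v, ok, err := getConfig("functional-558000", "cpus"); err == nil && ok {
		fmt.Println("cpus =", v)
	} else {
		fmt.Println("cpus is unset")
	}
}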

TestFunctional/parallel/DashboardCmd (12.86s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-558000 --alsologtostderr -v=1]
2024/04/30 19:44:32 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-558000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 10381: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.86s)

TestFunctional/parallel/DryRun (1.51s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-558000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-558000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (727.33882ms)

-- stdout --
	* [functional-558000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18779
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0430 19:44:17.503909   10286 out.go:291] Setting OutFile to fd 1 ...
	I0430 19:44:17.504112   10286 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 19:44:17.504118   10286 out.go:304] Setting ErrFile to fd 2...
	I0430 19:44:17.504121   10286 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 19:44:17.504291   10286 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 19:44:17.505738   10286 out.go:298] Setting JSON to false
	I0430 19:44:17.528345   10286 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2628,"bootTime":1714528829,"procs":441,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0430 19:44:17.528447   10286 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0430 19:44:17.550585   10286 out.go:177] * [functional-558000] minikube v1.33.0 on Darwin 14.4.1
	I0430 19:44:17.616281   10286 out.go:177]   - MINIKUBE_LOCATION=18779
	I0430 19:44:17.594188   10286 notify.go:220] Checking for updates...
	I0430 19:44:17.675188   10286 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	I0430 19:44:17.735196   10286 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0430 19:44:17.763517   10286 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0430 19:44:17.783826   10286 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	I0430 19:44:17.805002   10286 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0430 19:44:17.826500   10286 config.go:182] Loaded profile config "functional-558000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 19:44:17.827289   10286 driver.go:392] Setting default libvirt URI to qemu:///system
	I0430 19:44:17.882338   10286 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0430 19:44:17.882506   10286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0430 19:44:17.995191   10286 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:false NGoroutines:105 SystemTime:2024-05-01 02:44:17.984665266 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211080192 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-
0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev
SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/d
ocker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0430 19:44:18.038619   10286 out.go:177] * Using the docker driver based on existing profile
	I0430 19:44:18.059351   10286 start.go:297] selected driver: docker
	I0430 19:44:18.059371   10286 start.go:901] validating driver "docker" against &{Name:functional-558000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-558000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0430 19:44:18.059444   10286 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0430 19:44:18.083598   10286 out.go:177] 
	W0430 19:44:18.104649   10286 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0430 19:44:18.125498   10286 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-558000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.51s)
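
The dry run is rejected up front with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) because 250MB sits below the 1800MB usable minimum, leaving the running profile untouched. A sketch asserting that behavior, with the exit code again taken as observed here:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// A dry-run start with too little memory should be rejected up front.
	cmd := exec.Command("minikube", "start", "-p", "functional-558000",
		"--dry-run", "--memory", "250MB", "--driver=docker")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 23 {
		fmt.Println("rejected as expected: requested memory below usable minimum")
		return
	}
	fmt.Println("unexpected result:", err)
}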

TestFunctional/parallel/InternationalLanguage (0.66s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-558000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-558000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (656.214531ms)

-- stdout --
	* [functional-558000] minikube v1.33.0 sur Darwin 14.4.1
	  - MINIKUBE_LOCATION=18779
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0430 19:44:19.011419   10346 out.go:291] Setting OutFile to fd 1 ...
	I0430 19:44:19.011617   10346 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 19:44:19.011622   10346 out.go:304] Setting ErrFile to fd 2...
	I0430 19:44:19.011626   10346 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 19:44:19.011915   10346 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 19:44:19.014015   10346 out.go:298] Setting JSON to false
	I0430 19:44:19.038351   10346 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2630,"bootTime":1714528829,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0430 19:44:19.038446   10346 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0430 19:44:19.060584   10346 out.go:177] * [functional-558000] minikube v1.33.0 sur Darwin 14.4.1
	I0430 19:44:19.123038   10346 out.go:177]   - MINIKUBE_LOCATION=18779
	I0430 19:44:19.101914   10346 notify.go:220] Checking for updates...
	I0430 19:44:19.166920   10346 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
	I0430 19:44:19.209136   10346 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0430 19:44:19.230056   10346 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0430 19:44:19.251122   10346 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube
	I0430 19:44:19.272101   10346 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0430 19:44:19.293241   10346 config.go:182] Loaded profile config "functional-558000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 19:44:19.293651   10346 driver.go:392] Setting default libvirt URI to qemu:///system
	I0430 19:44:19.347423   10346 docker.go:122] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0430 19:44:19.347599   10346 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0430 19:44:19.456987   10346 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:false NGoroutines:105 SystemTime:2024-05-01 02:44:19.446000213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:23 KernelVersion:6.6.22-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6211080192 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-
0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev
SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/d
ocker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0430 19:44:19.478868   10346 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0430 19:44:19.499838   10346 start.go:297] selected driver: docker
	I0430 19:44:19.499873   10346 start.go:901] validating driver "docker" against &{Name:functional-558000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-558000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0430 19:44:19.500020   10346 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0430 19:44:19.525669   10346 out.go:177] 
	W0430 19:44:19.546850   10346 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0430 19:44:19.567841   10346 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.66s)

TestFunctional/parallel/StatusCmd (1.17s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.17s)
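
Status is read three ways above: the default table, a Go-template format string via -f, and JSON via -o json. The JSON form is easiest to consume; the sketch below assumes the JSON keys match the template field names ({{.Host}}, {{.Kubelet}}, {{.APIServer}}, {{.Kubeconfig}}) and a single-node profile that returns one object:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Field names mirror the Go-template keys used in the test; treating
// them as the JSON keys is an assumption here.
type status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-558000", "status", "-o", "json").Output()
	if err != nil {
		log.Fatal(err) // status exits non-zero when components are down
	}
	var st status
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}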

TestFunctional/parallel/AddonsCmd (0.27s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

TestFunctional/parallel/PersistentVolumeClaim (25.06s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6a769f25-6c92-4d0f-b2ab-089022afdf25] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005444714s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-558000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-558000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-558000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-558000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1efd7bd6-a77c-4376-a539-fa81af3652f4] Pending
helpers_test.go:344: "sp-pod" [1efd7bd6-a77c-4376-a539-fa81af3652f4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1efd7bd6-a77c-4376-a539-fa81af3652f4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.005948685s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-558000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-558000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-558000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [65d00703-70b4-4e9f-8b45-4e7bc9b96158] Pending
helpers_test.go:344: "sp-pod" [65d00703-70b4-4e9f-8b45-4e7bc9b96158] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [65d00703-70b4-4e9f-8b45-4e7bc9b96158] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.005555541s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-558000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.06s)
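
The claim test proves persistence across pod recreation: write through the first pod, delete it, bind a fresh pod to the same claim, and read the file back. A condensed kubectl-driven sketch of that sequence; the manifest paths are the test's own testdata names and are assumed to exist locally:

package main

import (
	"log"
	"os/exec"
)

// kubectl runs a kubectl subcommand against the profile's context.
func kubectl(args ...string) {
	full := append([]string{"--context", "functional-558000"}, args...)
	if out, err := exec.Command("kubectl", full...).CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m")
	// Write through the first pod, then destroy it.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	// A fresh pod bound to the same claim should still see the file.
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m")
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
}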

TestFunctional/parallel/SSHCmd (0.75s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

TestFunctional/parallel/CpCmd (2.77s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh -n functional-558000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 cp functional-558000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd2746519113/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh -n functional-558000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh -n functional-558000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.77s)

TestFunctional/parallel/MySQL (32.13s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-558000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-n2hdc" [d3a5131b-9d7d-43af-af91-22c0f3744622] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-n2hdc" [d3a5131b-9d7d-43af-af91-22c0f3744622] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.004776659s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-558000 exec mysql-64454c8b5c-n2hdc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-558000 exec mysql-64454c8b5c-n2hdc -- mysql -ppassword -e "show databases;": exit status 1 (190.194658ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-558000 exec mysql-64454c8b5c-n2hdc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-558000 exec mysql-64454c8b5c-n2hdc -- mysql -ppassword -e "show databases;": exit status 1 (160.882413ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-558000 exec mysql-64454c8b5c-n2hdc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-558000 exec mysql-64454c8b5c-n2hdc -- mysql -ppassword -e "show databases;": exit status 1 (129.850892ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-558000 exec mysql-64454c8b5c-n2hdc -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.13s)
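
The Access denied (1045) and Can't connect (2002) errors above are transient: the pod reports Running before mysqld inside it finishes initializing, so the test simply retries the query until it succeeds. A retry sketch in the same spirit — the pod name is copied from this log, and the attempt count and backoff are arbitrary:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Pod name taken from the log; in practice you would look it up
	// via the app=mysql label instead of hard-coding it.
	args := []string{"--context", "functional-558000", "exec",
		"mysql-64454c8b5c-n2hdc", "--", "mysql", "-ppassword", "-e", "show databases;"}
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("mysqld ready after %d attempt(s):\n%s", attempt, out)
			return
		}
		// ERROR 1045 / ERROR 2002 while mysqld bootstraps; back off and retry.
		time.Sleep(5 * time.Second)
	}
	fmt.Println("mysqld never became ready")
}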

TestFunctional/parallel/FileSync (0.41s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7854/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "sudo cat /etc/test/nested/copy/7854/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)

TestFunctional/parallel/CertSync (2.47s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7854.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "sudo cat /etc/ssl/certs/7854.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7854.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "sudo cat /usr/share/ca-certificates/7854.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/78542.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "sudo cat /etc/ssl/certs/78542.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/78542.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "sudo cat /usr/share/ca-certificates/78542.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.47s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-558000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
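
The label check ranges over the first node's label map with a Go template. The same template, runnable directly; the surrounding single quotes from the logged command are dropped here because no shell is involved:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Ranges over the label map of the first node and prints each key,
	// matching the template used by the test.
	tmpl := `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
	out, err := exec.Command("kubectl", "--context", "functional-558000",
		"get", "nodes", "--output=go-template", "--template="+tmpl).Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}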

TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-558000 ssh "sudo systemctl is-active crio": exit status 1 (456.027953ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

TestFunctional/parallel/License (0.59s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.59s)

TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

                                                
                                    
TestFunctional/parallel/Version/components (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.58s)
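Both Version subtests are plain read-only subcommands, safe to replay against any live profile:

    out/minikube-darwin-amd64 -p functional-558000 version --short
    out/minikube-darwin-amd64 -p functional-558000 version -o=json --components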

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-558000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-558000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-558000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-558000 image ls --format short --alsologtostderr:
I0430 19:44:33.664252   10408 out.go:291] Setting OutFile to fd 1 ...
I0430 19:44:33.664558   10408 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0430 19:44:33.664564   10408 out.go:304] Setting ErrFile to fd 2...
I0430 19:44:33.664568   10408 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0430 19:44:33.664761   10408 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
I0430 19:44:33.665410   10408 config.go:182] Loaded profile config "functional-558000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0430 19:44:33.665514   10408 config.go:182] Loaded profile config "functional-558000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0430 19:44:33.665969   10408 cli_runner.go:164] Run: docker container inspect functional-558000 --format={{.State.Status}}
I0430 19:44:33.773399   10408 ssh_runner.go:195] Run: systemctl --version
I0430 19:44:33.773464   10408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558000
I0430 19:44:33.827387   10408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53063 SSHKeyPath:/Users/jenkins/minikube-integration/18779-7316/.minikube/machines/functional-558000/id_rsa Username:docker}
I0430 19:44:33.915581   10408 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.37s)
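The three sibling tests that follow (ImageListTable, ImageListJson, ImageListYaml) drive the same subcommand and differ only in the --format value, so the whole family can be replayed as:

    out/minikube-darwin-amd64 -p functional-558000 image ls --format short
    out/minikube-darwin-amd64 -p functional-558000 image ls --format table
    out/minikube-darwin-amd64 -p functional-558000 image ls --format json
    out/minikube-darwin-amd64 -p functional-558000 image ls --format yaml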

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-558000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | latest            | 7383c266ef252 | 188MB  |
| registry.k8s.io/kube-apiserver              | v1.30.0           | c42f13656d0b2 | 117MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| gcr.io/google-containers/addon-resizer      | functional-558000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine            | f4215f6ee683f | 48.3MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-scheduler              | v1.30.0           | 259c8277fcbbc | 62MB   |
| registry.k8s.io/kube-controller-manager     | v1.30.0           | c7aad43836fa5 | 111MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-558000 | f50cd14ca2620 | 30B    |
| registry.k8s.io/kube-proxy                  | v1.30.0           | a0bf559e280cf | 84.7MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-558000 image ls --format table --alsologtostderr:
I0430 19:44:34.493332   10441 out.go:291] Setting OutFile to fd 1 ...
I0430 19:44:34.493521   10441 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0430 19:44:34.493527   10441 out.go:304] Setting ErrFile to fd 2...
I0430 19:44:34.493531   10441 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0430 19:44:34.493731   10441 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
I0430 19:44:34.494334   10441 config.go:182] Loaded profile config "functional-558000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0430 19:44:34.494427   10441 config.go:182] Loaded profile config "functional-558000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0430 19:44:34.494840   10441 cli_runner.go:164] Run: docker container inspect functional-558000 --format={{.State.Status}}
I0430 19:44:34.549419   10441 ssh_runner.go:195] Run: systemctl --version
I0430 19:44:34.549504   10441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558000
I0430 19:44:34.600873   10441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53063 SSHKeyPath:/Users/jenkins/minikube-integration/18779-7316/.minikube/machines/functional-558000/id_rsa Username:docker}
I0430 19:44:34.685133   10441 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-558000 image ls --format json --alsologtostderr:
[{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"f50cd14ca2620f6ccb3c19eea1cd885774928190353a77d2322f0d65dd8d1573","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-558000"],"size":"30"},{"id":"f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"62000000"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"84700000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-558000"],"size":"32900000"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"111000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-558000 image ls --format json --alsologtostderr:
I0430 19:44:34.176716   10428 out.go:291] Setting OutFile to fd 1 ...
I0430 19:44:34.176919   10428 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0430 19:44:34.176925   10428 out.go:304] Setting ErrFile to fd 2...
I0430 19:44:34.176929   10428 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0430 19:44:34.177122   10428 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
I0430 19:44:34.177837   10428 config.go:182] Loaded profile config "functional-558000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0430 19:44:34.177938   10428 config.go:182] Loaded profile config "functional-558000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0430 19:44:34.178328   10428 cli_runner.go:164] Run: docker container inspect functional-558000 --format={{.State.Status}}
I0430 19:44:34.235624   10428 ssh_runner.go:195] Run: systemctl --version
I0430 19:44:34.235705   10428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558000
I0430 19:44:34.288564   10428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53063 SSHKeyPath:/Users/jenkins/minikube-integration/18779-7316/.minikube/machines/functional-558000/id_rsa Username:docker}
I0430 19:44:34.374495   10428 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-558000 image ls --format yaml --alsologtostderr:
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117000000"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "111000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "84700000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-558000
size: "32900000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: f50cd14ca2620f6ccb3c19eea1cd885774928190353a77d2322f0d65dd8d1573
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-558000
size: "30"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "62000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-558000 image ls --format yaml --alsologtostderr:
I0430 19:44:33.862644   10416 out.go:291] Setting OutFile to fd 1 ...
I0430 19:44:33.862981   10416 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0430 19:44:33.862987   10416 out.go:304] Setting ErrFile to fd 2...
I0430 19:44:33.862991   10416 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0430 19:44:33.863157   10416 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
I0430 19:44:33.863743   10416 config.go:182] Loaded profile config "functional-558000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0430 19:44:33.863838   10416 config.go:182] Loaded profile config "functional-558000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0430 19:44:33.864272   10416 cli_runner.go:164] Run: docker container inspect functional-558000 --format={{.State.Status}}
I0430 19:44:33.916566   10416 ssh_runner.go:195] Run: systemctl --version
I0430 19:44:33.916633   10416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558000
I0430 19:44:33.969887   10416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53063 SSHKeyPath:/Users/jenkins/minikube-integration/18779-7316/.minikube/machines/functional-558000/id_rsa Username:docker}
I0430 19:44:34.058239   10416 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-558000 ssh pgrep buildkitd: exit status 1 (376.683981ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image build -t localhost/my-image:functional-558000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-558000 image build -t localhost/my-image:functional-558000 testdata/build --alsologtostderr: (4.955880473s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-558000 image build -t localhost/my-image:functional-558000 testdata/build --alsologtostderr:
I0430 19:44:34.408001   10439 out.go:291] Setting OutFile to fd 1 ...
I0430 19:44:34.421500   10439 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0430 19:44:34.421519   10439 out.go:304] Setting ErrFile to fd 2...
I0430 19:44:34.421545   10439 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0430 19:44:34.423012   10439 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
I0430 19:44:34.424433   10439 config.go:182] Loaded profile config "functional-558000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0430 19:44:34.425362   10439 config.go:182] Loaded profile config "functional-558000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0430 19:44:34.426034   10439 cli_runner.go:164] Run: docker container inspect functional-558000 --format={{.State.Status}}
I0430 19:44:34.483142   10439 ssh_runner.go:195] Run: systemctl --version
I0430 19:44:34.483214   10439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558000
I0430 19:44:34.537630   10439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53063 SSHKeyPath:/Users/jenkins/minikube-integration/18779-7316/.minikube/machines/functional-558000/id_rsa Username:docker}
I0430 19:44:34.624112   10439 build_images.go:161] Building image from path: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.509096517.tar
I0430 19:44:34.624238   10439 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0430 19:44:34.633205   10439 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.509096517.tar
I0430 19:44:34.637248   10439 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.509096517.tar: stat -c "%s %y" /var/lib/minikube/build/build.509096517.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.509096517.tar': No such file or directory
I0430 19:44:34.637283   10439 ssh_runner.go:362] scp /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.509096517.tar --> /var/lib/minikube/build/build.509096517.tar (3072 bytes)
I0430 19:44:34.658500   10439 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.509096517
I0430 19:44:34.667791   10439 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.509096517 -xf /var/lib/minikube/build/build.509096517.tar
I0430 19:44:34.676782   10439 docker.go:360] Building image: /var/lib/minikube/build/build.509096517
I0430 19:44:34.676904   10439 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-558000 /var/lib/minikube/build/build.509096517
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 3.5s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:cd188827d963b595f9ad9cd0d501d13bb8b001ae1ba11721122610c297d00960 done
#8 naming to localhost/my-image:functional-558000 done
#8 DONE 0.0s
I0430 19:44:39.252167   10439 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-558000 /var/lib/minikube/build/build.509096517: (4.575294667s)
I0430 19:44:39.252222   10439 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.509096517
I0430 19:44:39.260797   10439 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.509096517.tar
I0430 19:44:39.269040   10439 build_images.go:217] Built localhost/my-image:functional-558000 from /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.509096517.tar
I0430 19:44:39.269069   10439 build_images.go:133] succeeded building to: functional-558000
I0430 19:44:39.269074   10439 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.63s)
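Judging from BuildKit stages #1 through #7 above, the testdata/build context amounts to a three-instruction Dockerfile; this is a reconstruction from the log, not the literal file:

    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /

The build then runs against the in-cluster daemon with:

    out/minikube-darwin-amd64 -p functional-558000 image build -t localhost/my-image:functional-558000 testdata/build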

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (5.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.322098375s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-558000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.38s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-558000 docker-env) && out/minikube-darwin-amd64 status -p functional-558000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-558000 docker-env) && out/minikube-darwin-amd64 status -p functional-558000": (1.131537227s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-558000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.76s)
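This subtest doubles as the standard recipe for pointing a host docker CLI at the cluster's daemon; both lines are lifted from the invocations above:

    eval $(out/minikube-darwin-amd64 -p functional-558000 docker-env)
    docker images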

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.30s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.33s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.37s)
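All three UpdateContextCmd variants exercise the same subcommand, which rewrites the profile's kubeconfig entry to match the cluster's current endpoint:

    out/minikube-darwin-amd64 -p functional-558000 update-context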

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image load --daemon gcr.io/google-containers/addon-resizer:functional-558000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-558000 image load --daemon gcr.io/google-containers/addon-resizer:functional-558000 --alsologtostderr: (3.533237936s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image load --daemon gcr.io/google-containers/addon-resizer:functional-558000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-558000 image load --daemon gcr.io/google-containers/addon-resizer:functional-558000 --alsologtostderr: (2.457221449s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
E0430 19:41:47.533334    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.385926031s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-558000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image load --daemon gcr.io/google-containers/addon-resizer:functional-558000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-558000 image load --daemon gcr.io/google-containers/addon-resizer:functional-558000 --alsologtostderr: (4.189428668s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.97s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image save gcr.io/google-containers/addon-resizer:functional-558000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-558000 image save gcr.io/google-containers/addon-resizer:functional-558000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.355809165s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image rm gcr.io/google-containers/addon-resizer:functional-558000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-558000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.153723035s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-558000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image save --daemon gcr.io/google-containers/addon-resizer:functional-558000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-558000 image save --daemon gcr.io/google-containers/addon-resizer:functional-558000 --alsologtostderr: (1.126078788s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-558000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.23s)
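Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon form a save/remove/load round trip; condensed, with the tarball path from this run:

    out/minikube-darwin-amd64 -p functional-558000 image save gcr.io/google-containers/addon-resizer:functional-558000 /Users/jenkins/workspace/addon-resizer-save.tar
    out/minikube-darwin-amd64 -p functional-558000 image rm gcr.io/google-containers/addon-resizer:functional-558000
    out/minikube-darwin-amd64 -p functional-558000 image load /Users/jenkins/workspace/addon-resizer-save.tar
    out/minikube-darwin-amd64 -p functional-558000 image save --daemon gcr.io/google-containers/addon-resizer:functional-558000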

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (104.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-558000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-558000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-dn2gt" [323c2d22-7b7e-4425-9e90-cd22bba14d76] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-dn2gt" [323c2d22-7b7e-4425-9e90-cd22bba14d76] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 1m44.005918061s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (104.13s)
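The fixture the later ServiceCmd subtests poke at is nothing more than a deployment plus a NodePort service, created with stock kubectl (commands copied from the run):

    kubectl --context functional-558000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-558000 expose deployment hello-node --type=NodePort --port=8080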

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-558000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-558000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-558000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 9829: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-558000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-558000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (71.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-558000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [78f4701e-7dad-44e9-abe8-e3bf12cc31ba] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0430 19:42:28.493401    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
helpers_test.go:344: "nginx-svc" [78f4701e-7dad-44e9-abe8-e3bf12cc31ba] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 1m11.004436527s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (71.14s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-558000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-558000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 9858: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)
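The tunnel subtests reduce to running the tunnel in one shell and reading the LoadBalancer ingress IP from another; both commands appear verbatim in the logs above:

    out/minikube-darwin-amd64 -p functional-558000 tunnel --alsologtostderr
    kubectl --context functional-558000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}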

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 service list -o json
functional_test.go:1490: Took "599.633327ms" to run "out/minikube-darwin-amd64 -p functional-558000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 service --namespace=default --https --url hello-node
E0430 19:43:50.413001    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-558000 service --namespace=default --https --url hello-node: signal: killed (15.003393657s)

                                                
                                                
-- stdout --
	https://127.0.0.1:53397

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:53397
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)
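The "signal: killed" exit here is the harness, not a failure: with the Docker driver on darwin the service command keeps a tunnel process in the foreground (hence the "terminal needs to be open" warning), so the test appears to read the URL from stdout and then kill the process at its 15s cutoff. Interactively the same URL comes from:

    out/minikube-darwin-amd64 -p functional-558000 service --namespace=default --https --url hello-node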

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)
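The misspelled "profile lis" above is deliberate: the point of profile_not_create is that a botched profile subcommand must not leave a stray profile behind, which the follow-up "profile list --output json" checks.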

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "442.903806ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "85.645605ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "445.359064ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "85.991579ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)
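The timings in the two profile tests above illustrate what --light (and -l) buys: it skips validating each cluster's status, which is consistent with the ~86ms light listings versus ~445ms for the full ones in this run:

    out/minikube-darwin-amd64 profile list -o json --light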

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (11.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-558000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port404214425/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1714531441107252000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port404214425/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1714531441107252000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port404214425/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1714531441107252000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port404214425/001/test-1714531441107252000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (386.211963ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May  1 02:44 created-by-test
-rw-r--r-- 1 docker docker 24 May  1 02:44 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May  1 02:44 test-1714531441107252000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh cat /mount-9p/test-1714531441107252000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-558000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ca73acb9-bc67-4d88-a26d-036bfb2b0146] Pending
helpers_test.go:344: "busybox-mount" [ca73acb9-bc67-4d88-a26d-036bfb2b0146] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ca73acb9-bc67-4d88-a26d-036bfb2b0146] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ca73acb9-bc67-4d88-a26d-036bfb2b0146] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.003311449s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-558000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-558000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port404214425/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.47s)
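The first findmnt probe races the mount coming up, which is why a single non-zero exit precedes the successful retry. The flow generalizes to any host directory; with $SRC standing in for the temp dir this run used:

    out/minikube-darwin-amd64 mount -p functional-558000 $SRC:/mount-9p
    out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-darwin-amd64 -p functional-558000 ssh "sudo umount -f /mount-9p"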

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-558000 service hello-node --url --format={{.IP}}: signal: killed (15.003395987s)

                                                
                                                
-- stdout --
	127.0.0.1

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-558000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port3973192081/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (381.340595ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-558000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port3973192081/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-558000 ssh "sudo umount -f /mount-9p": exit status 1 (351.67608ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-558000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-558000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port3973192081/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.22s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.64s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-558000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3477422425/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-558000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3477422425/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-558000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3477422425/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T" /mount1: exit status 1 (487.350136ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-558000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-558000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3477422425/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-558000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3477422425/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-558000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3477422425/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.64s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-558000 service hello-node --url: signal: killed (15.003290246s)

-- stdout --
	http://127.0.0.1:53474

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:53474
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/delete_addon-resizer_images (0.13s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-558000
--- PASS: TestFunctional/delete_addon-resizer_images (0.13s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-558000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.07s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-558000
--- PASS: TestFunctional/delete_minikube_cached_images (0.07s)

TestMultiControlPlane/serial/StartCluster (100.1s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-270000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker 
E0430 19:46:06.578931    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-270000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker : (1m38.98951727s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-darwin-amd64 -p ha-270000 status -v=7 --alsologtostderr: (1.105697116s)
--- PASS: TestMultiControlPlane/serial/StartCluster (100.10s)

TestMultiControlPlane/serial/DeployApp (10.77s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-270000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-270000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-270000 -- rollout status deployment/busybox: (8.17161293s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-270000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-270000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-270000 -- exec busybox-fc5497c4f-725w9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-270000 -- exec busybox-fc5497c4f-dfvb9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-270000 -- exec busybox-fc5497c4f-m4hpg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-270000 -- exec busybox-fc5497c4f-725w9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-270000 -- exec busybox-fc5497c4f-dfvb9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-270000 -- exec busybox-fc5497c4f-m4hpg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-270000 -- exec busybox-fc5497c4f-725w9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-270000 -- exec busybox-fc5497c4f-dfvb9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-270000 -- exec busybox-fc5497c4f-m4hpg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (10.77s)

TestMultiControlPlane/serial/PingHostFromPods (1.46s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-270000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-270000 -- exec busybox-fc5497c4f-725w9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-270000 -- exec busybox-fc5497c4f-725w9 -- sh -c "ping -c 1 192.168.65.254"
E0430 19:46:34.279343    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-270000 -- exec busybox-fc5497c4f-dfvb9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-270000 -- exec busybox-fc5497c4f-dfvb9 -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-270000 -- exec busybox-fc5497c4f-m4hpg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-270000 -- exec busybox-fc5497c4f-m4hpg -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.46s)

TestMultiControlPlane/serial/AddWorkerNode (19.28s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-270000 -v=7 --alsologtostderr
E0430 19:46:41.228987    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 19:46:41.234457    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 19:46:41.245178    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 19:46:41.265313    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 19:46:41.305970    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 19:46:41.386195    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 19:46:41.546400    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 19:46:41.866574    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 19:46:42.506963    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 19:46:43.787321    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 19:46:46.348260    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 19:46:51.468736    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-270000 -v=7 --alsologtostderr: (17.913501203s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-darwin-amd64 -p ha-270000 status -v=7 --alsologtostderr: (1.364270206s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (19.28s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-270000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.14s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.142850508s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.14s)

TestMultiControlPlane/serial/CopyFile (24.71s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-darwin-amd64 -p ha-270000 status --output json -v=7 --alsologtostderr: (1.351747273s)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 cp testdata/cp-test.txt ha-270000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 cp ha-270000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile2918456730/001/cp-test_ha-270000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 cp ha-270000:/home/docker/cp-test.txt ha-270000-m02:/home/docker/cp-test_ha-270000_ha-270000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test_ha-270000_ha-270000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 cp ha-270000:/home/docker/cp-test.txt ha-270000-m03:/home/docker/cp-test_ha-270000_ha-270000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test_ha-270000_ha-270000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 cp ha-270000:/home/docker/cp-test.txt ha-270000-m04:/home/docker/cp-test_ha-270000_ha-270000-m04.txt
E0430 19:47:01.709525    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test_ha-270000_ha-270000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 cp testdata/cp-test.txt ha-270000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 cp ha-270000-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile2918456730/001/cp-test_ha-270000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 cp ha-270000-m02:/home/docker/cp-test.txt ha-270000:/home/docker/cp-test_ha-270000-m02_ha-270000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test_ha-270000-m02_ha-270000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 cp ha-270000-m02:/home/docker/cp-test.txt ha-270000-m03:/home/docker/cp-test_ha-270000-m02_ha-270000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test_ha-270000-m02_ha-270000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 cp ha-270000-m02:/home/docker/cp-test.txt ha-270000-m04:/home/docker/cp-test_ha-270000-m02_ha-270000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test_ha-270000-m02_ha-270000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 cp testdata/cp-test.txt ha-270000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 cp ha-270000-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile2918456730/001/cp-test_ha-270000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 cp ha-270000-m03:/home/docker/cp-test.txt ha-270000:/home/docker/cp-test_ha-270000-m03_ha-270000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test_ha-270000-m03_ha-270000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 cp ha-270000-m03:/home/docker/cp-test.txt ha-270000-m02:/home/docker/cp-test_ha-270000-m03_ha-270000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test_ha-270000-m03_ha-270000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 cp ha-270000-m03:/home/docker/cp-test.txt ha-270000-m04:/home/docker/cp-test_ha-270000-m03_ha-270000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test_ha-270000-m03_ha-270000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 cp testdata/cp-test.txt ha-270000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 cp ha-270000-m04:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile2918456730/001/cp-test_ha-270000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 cp ha-270000-m04:/home/docker/cp-test.txt ha-270000:/home/docker/cp-test_ha-270000-m04_ha-270000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000 "sudo cat /home/docker/cp-test_ha-270000-m04_ha-270000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 cp ha-270000-m04:/home/docker/cp-test.txt ha-270000-m02:/home/docker/cp-test_ha-270000-m04_ha-270000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m02 "sudo cat /home/docker/cp-test_ha-270000-m04_ha-270000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 cp ha-270000-m04:/home/docker/cp-test.txt ha-270000-m03:/home/docker/cp-test_ha-270000-m04_ha-270000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 ssh -n ha-270000-m03 "sudo cat /home/docker/cp-test_ha-270000-m04_ha-270000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (24.71s)

TestMultiControlPlane/serial/StopSecondaryNode (11.89s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 node stop m02 -v=7 --alsologtostderr
E0430 19:47:22.190073    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-270000 node stop m02 -v=7 --alsologtostderr: (10.843366073s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-270000 status -v=7 --alsologtostderr: exit status 7 (1.04835013s)

-- stdout --
	ha-270000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-270000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-270000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-270000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0430 19:47:31.319040   11668 out.go:291] Setting OutFile to fd 1 ...
	I0430 19:47:31.319278   11668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 19:47:31.319284   11668 out.go:304] Setting ErrFile to fd 2...
	I0430 19:47:31.319288   11668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 19:47:31.319482   11668 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 19:47:31.319669   11668 out.go:298] Setting JSON to false
	I0430 19:47:31.319691   11668 mustload.go:65] Loading cluster: ha-270000
	I0430 19:47:31.319728   11668 notify.go:220] Checking for updates...
	I0430 19:47:31.321099   11668 config.go:182] Loaded profile config "ha-270000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 19:47:31.321121   11668 status.go:255] checking status of ha-270000 ...
	I0430 19:47:31.321530   11668 cli_runner.go:164] Run: docker container inspect ha-270000 --format={{.State.Status}}
	I0430 19:47:31.373538   11668 status.go:330] ha-270000 host status = "Running" (err=<nil>)
	I0430 19:47:31.373578   11668 host.go:66] Checking if "ha-270000" exists ...
	I0430 19:47:31.373812   11668 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-270000
	I0430 19:47:31.424671   11668 host.go:66] Checking if "ha-270000" exists ...
	I0430 19:47:31.424990   11668 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0430 19:47:31.425067   11668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-270000
	I0430 19:47:31.476231   11668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53542 SSHKeyPath:/Users/jenkins/minikube-integration/18779-7316/.minikube/machines/ha-270000/id_rsa Username:docker}
	I0430 19:47:31.561938   11668 ssh_runner.go:195] Run: systemctl --version
	I0430 19:47:31.566351   11668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0430 19:47:31.576645   11668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-270000
	I0430 19:47:31.628141   11668 kubeconfig.go:125] found "ha-270000" server: "https://127.0.0.1:53541"
	I0430 19:47:31.628172   11668 api_server.go:166] Checking apiserver status ...
	I0430 19:47:31.628210   11668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0430 19:47:31.638770   11668 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2215/cgroup
	W0430 19:47:31.648207   11668 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2215/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0430 19:47:31.648262   11668 ssh_runner.go:195] Run: ls
	I0430 19:47:31.651917   11668 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:53541/healthz ...
	I0430 19:47:31.656662   11668 api_server.go:279] https://127.0.0.1:53541/healthz returned 200:
	ok
	I0430 19:47:31.656677   11668 status.go:422] ha-270000 apiserver status = Running (err=<nil>)
	I0430 19:47:31.656689   11668 status.go:257] ha-270000 status: &{Name:ha-270000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0430 19:47:31.656700   11668 status.go:255] checking status of ha-270000-m02 ...
	I0430 19:47:31.656934   11668 cli_runner.go:164] Run: docker container inspect ha-270000-m02 --format={{.State.Status}}
	I0430 19:47:31.709293   11668 status.go:330] ha-270000-m02 host status = "Stopped" (err=<nil>)
	I0430 19:47:31.709317   11668 status.go:343] host is not running, skipping remaining checks
	I0430 19:47:31.709327   11668 status.go:257] ha-270000-m02 status: &{Name:ha-270000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0430 19:47:31.709346   11668 status.go:255] checking status of ha-270000-m03 ...
	I0430 19:47:31.709626   11668 cli_runner.go:164] Run: docker container inspect ha-270000-m03 --format={{.State.Status}}
	I0430 19:47:31.763986   11668 status.go:330] ha-270000-m03 host status = "Running" (err=<nil>)
	I0430 19:47:31.764033   11668 host.go:66] Checking if "ha-270000-m03" exists ...
	I0430 19:47:31.764331   11668 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-270000-m03
	I0430 19:47:31.820487   11668 host.go:66] Checking if "ha-270000-m03" exists ...
	I0430 19:47:31.820738   11668 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0430 19:47:31.820794   11668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-270000-m03
	I0430 19:47:31.872650   11668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53645 SSHKeyPath:/Users/jenkins/minikube-integration/18779-7316/.minikube/machines/ha-270000-m03/id_rsa Username:docker}
	I0430 19:47:31.960024   11668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0430 19:47:31.971054   11668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-270000
	I0430 19:47:32.025456   11668 kubeconfig.go:125] found "ha-270000" server: "https://127.0.0.1:53541"
	I0430 19:47:32.025480   11668 api_server.go:166] Checking apiserver status ...
	I0430 19:47:32.025522   11668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0430 19:47:32.038327   11668 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2118/cgroup
	W0430 19:47:32.048155   11668 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2118/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0430 19:47:32.048230   11668 ssh_runner.go:195] Run: ls
	I0430 19:47:32.052745   11668 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:53541/healthz ...
	I0430 19:47:32.056725   11668 api_server.go:279] https://127.0.0.1:53541/healthz returned 200:
	ok
	I0430 19:47:32.056738   11668 status.go:422] ha-270000-m03 apiserver status = Running (err=<nil>)
	I0430 19:47:32.056746   11668 status.go:257] ha-270000-m03 status: &{Name:ha-270000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0430 19:47:32.056757   11668 status.go:255] checking status of ha-270000-m04 ...
	I0430 19:47:32.057009   11668 cli_runner.go:164] Run: docker container inspect ha-270000-m04 --format={{.State.Status}}
	I0430 19:47:32.107419   11668 status.go:330] ha-270000-m04 host status = "Running" (err=<nil>)
	I0430 19:47:32.107446   11668 host.go:66] Checking if "ha-270000-m04" exists ...
	I0430 19:47:32.107715   11668 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-270000-m04
	I0430 19:47:32.157531   11668 host.go:66] Checking if "ha-270000-m04" exists ...
	I0430 19:47:32.157799   11668 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0430 19:47:32.157852   11668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-270000-m04
	I0430 19:47:32.208611   11668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53770 SSHKeyPath:/Users/jenkins/minikube-integration/18779-7316/.minikube/machines/ha-270000-m04/id_rsa Username:docker}
	I0430 19:47:32.293441   11668 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0430 19:47:32.303679   11668 status.go:257] ha-270000-m04 status: &{Name:ha-270000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.89s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

TestMultiControlPlane/serial/RestartSecondaryNode (60.97s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 node start m02 -v=7 --alsologtostderr
E0430 19:48:03.149969    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-270000 node start m02 -v=7 --alsologtostderr: (59.585653164s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-darwin-amd64 -p ha-270000 status -v=7 --alsologtostderr: (1.332702986s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (60.97s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.12s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.124413486s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.12s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (230.61s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-270000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-270000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-270000 -v=7 --alsologtostderr: (34.2969491s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-270000 --wait=true -v=7 --alsologtostderr
E0430 19:49:25.069682    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 19:51:06.591808    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 19:51:41.227183    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 19:52:08.908230    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-270000 --wait=true -v=7 --alsologtostderr: (3m16.167197869s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-270000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (230.61s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.69s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-270000 node delete m03 -v=7 --alsologtostderr: (10.554659963s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Done: out/minikube-darwin-amd64 -p ha-270000 status -v=7 --alsologtostderr: (1.000012013s)
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.69s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

TestMultiControlPlane/serial/StopCluster (32.75s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-270000 stop -v=7 --alsologtostderr: (32.536193237s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-270000 status -v=7 --alsologtostderr: exit status 7 (213.503139ms)

-- stdout --
	ha-270000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-270000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-270000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0430 19:53:10.939858   12361 out.go:291] Setting OutFile to fd 1 ...
	I0430 19:53:10.940048   12361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 19:53:10.940054   12361 out.go:304] Setting ErrFile to fd 2...
	I0430 19:53:10.940058   12361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0430 19:53:10.940236   12361 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18779-7316/.minikube/bin
	I0430 19:53:10.940419   12361 out.go:298] Setting JSON to false
	I0430 19:53:10.940444   12361 mustload.go:65] Loading cluster: ha-270000
	I0430 19:53:10.940481   12361 notify.go:220] Checking for updates...
	I0430 19:53:10.940745   12361 config.go:182] Loaded profile config "ha-270000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0430 19:53:10.940759   12361 status.go:255] checking status of ha-270000 ...
	I0430 19:53:10.941140   12361 cli_runner.go:164] Run: docker container inspect ha-270000 --format={{.State.Status}}
	I0430 19:53:10.991167   12361 status.go:330] ha-270000 host status = "Stopped" (err=<nil>)
	I0430 19:53:10.991188   12361 status.go:343] host is not running, skipping remaining checks
	I0430 19:53:10.991194   12361 status.go:257] ha-270000 status: &{Name:ha-270000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0430 19:53:10.991211   12361 status.go:255] checking status of ha-270000-m02 ...
	I0430 19:53:10.991448   12361 cli_runner.go:164] Run: docker container inspect ha-270000-m02 --format={{.State.Status}}
	I0430 19:53:11.040267   12361 status.go:330] ha-270000-m02 host status = "Stopped" (err=<nil>)
	I0430 19:53:11.040304   12361 status.go:343] host is not running, skipping remaining checks
	I0430 19:53:11.040314   12361 status.go:257] ha-270000-m02 status: &{Name:ha-270000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0430 19:53:11.040336   12361 status.go:255] checking status of ha-270000-m04 ...
	I0430 19:53:11.040617   12361 cli_runner.go:164] Run: docker container inspect ha-270000-m04 --format={{.State.Status}}
	I0430 19:53:11.089591   12361 status.go:330] ha-270000-m04 host status = "Stopped" (err=<nil>)
	I0430 19:53:11.089624   12361 status.go:343] host is not running, skipping remaining checks
	I0430 19:53:11.089635   12361 status.go:257] ha-270000-m04 status: &{Name:ha-270000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.75s)

TestMultiControlPlane/serial/RestartCluster (83.88s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-270000 --wait=true -v=7 --alsologtostderr --driver=docker 
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-270000 --wait=true -v=7 --alsologtostderr --driver=docker : (1m22.704045434s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 status -v=7 --alsologtostderr
ha_test.go:566: (dbg) Done: out/minikube-darwin-amd64 -p ha-270000 status -v=7 --alsologtostderr: (1.047797231s)
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (83.88s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

TestMultiControlPlane/serial/AddSecondaryNode (37.54s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-270000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-270000 --control-plane -v=7 --alsologtostderr: (36.179616344s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-270000 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-darwin-amd64 -p ha-270000 status -v=7 --alsologtostderr: (1.359127657s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (37.54s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.13s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.125164119s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.13s)

TestImageBuild/serial/Setup (19.71s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-604000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-604000 --driver=docker : (19.708006439s)
--- PASS: TestImageBuild/serial/Setup (19.71s)

TestImageBuild/serial/NormalBuild (4.02s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-604000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-604000: (4.015851768s)
--- PASS: TestImageBuild/serial/NormalBuild (4.02s)

TestImageBuild/serial/BuildWithBuildArg (1.55s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-604000
image_test.go:99: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-604000: (1.548148465s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.55s)

TestImageBuild/serial/BuildWithDockerIgnore (1.25s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-604000
image_test.go:133: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-604000: (1.253674086s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.25s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.37s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-604000
image_test.go:88: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-604000: (1.372153283s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.37s)

TestJSONOutput/start/Command (74.61s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-170000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0430 19:56:06.589596    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0430 19:56:41.224400    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-170000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (1m14.607067436s)
--- PASS: TestJSONOutput/start/Command (74.61s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.56s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-170000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.56s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.58s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-170000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.79s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-170000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-170000 --output=json --user=testUser: (10.789676752s)
--- PASS: TestJSONOutput/stop/Command (10.79s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.78s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-264000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-264000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (395.84793ms)
-- stdout --
	{"specversion":"1.0","id":"e16e3395-8307-41c1-a163-740dc7cbd45f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-264000] minikube v1.33.0 on Darwin 14.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2a660c4c-ed75-49ae-8cfd-d1cd34c7bdfd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18779"}}
	{"specversion":"1.0","id":"8cf9b658-6e56-4fa2-86a5-1c9c9fc6b9bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig"}}
	{"specversion":"1.0","id":"17649f8f-3160-4c69-ba79-7a98ba87eec2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"ee1425ea-9061-4068-b40b-2d93c918ed79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"eedefb28-25c6-4c6b-8446-326867771f81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18779-7316/.minikube"}}
	{"specversion":"1.0","id":"1a778427-60a6-4e3d-9fa7-5856f379be49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"22612997-12ed-4442-962e-3409c9030854","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-264000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-264000
--- PASS: TestErrorJSONOutput (0.78s)
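
For reference, the stdout block above is a stream of CloudEvents-style JSON, one event per line, which is what the TestJSONOutput and TestErrorJSONOutput assertions parse. Below is a minimal sketch (not minikube's or the suite's own code) of consuming such a stream in Go; the struct fields mirror the attributes visible in the events above, and everything else is illustrative.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the fields seen in the log lines above.
type cloudEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// e.g. pipe `minikube start -p foo --output=json` into this program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // non-JSON line; skip
		}
		// The error event above carries exitcode/message inside data.
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}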

TestKicCustomNetwork/create_custom_network (22.99s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-667000 --network=
E0430 19:57:29.635906    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-667000 --network=: (20.50778131s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-667000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-667000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-667000: (2.434043966s)
--- PASS: TestKicCustomNetwork/create_custom_network (22.99s)

TestKicCustomNetwork/use_default_bridge_network (22.11s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-001000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-001000 --network=bridge: (19.851464043s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-001000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-001000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-001000: (2.202489071s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.11s)

TestKicExistingNetwork (21.76s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-668000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-668000 --network=existing-network: (19.329934904s)
helpers_test.go:175: Cleaning up "existing-network-668000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-668000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-668000: (2.051424755s)
--- PASS: TestKicExistingNetwork (21.76s)

TestKicCustomSubnet (22.24s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-050000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-050000 --subnet=192.168.60.0/24: (19.828456248s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-050000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-050000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-050000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-050000: (2.358718349s)
--- PASS: TestKicCustomSubnet (22.24s)
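
For reference, the --format argument in the docker network inspect call above is a Go text/template expression evaluated against the inspected network object. A minimal sketch of that evaluation, using illustrative stand-in types rather than docker's real ones:

package main

import (
	"os"
	"text/template"
)

// Stand-ins for the slice-of-IPAM-configs shape the template indexes into.
type ipamConfig struct {
	Subnet string
}

type network struct {
	IPAM struct {
		Config []ipamConfig
	}
}

func main() {
	var n network
	n.IPAM.Config = []ipamConfig{{Subnet: "192.168.60.0/24"}} // the subnet requested above
	// Same template string as the test's docker network inspect call.
	t := template.Must(template.New("subnet").Parse("{{(index .IPAM.Config 0).Subnet}}"))
	t.Execute(os.Stdout, n) // prints: 192.168.60.0/24
}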

TestKicStaticIP (23.22s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-942000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-942000 --static-ip=192.168.200.200: (20.516969469s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-942000 ip
helpers_test.go:175: Cleaning up "static-ip-942000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-942000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-942000: (2.470308549s)
--- PASS: TestKicStaticIP (23.22s)

TestMainNoArgs (0.09s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.09s)

TestMinikubeProfile (47.13s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-309000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-309000 --driver=docker : (19.887157689s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-312000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-312000 --driver=docker : (20.496489618s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-309000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-312000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-312000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-312000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-312000: (2.41807447s)
helpers_test.go:175: Cleaning up "first-309000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-309000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-309000: (2.394477873s)
--- PASS: TestMinikubeProfile (47.13s)

TestMountStart/serial/StartWithMountFirst (7.42s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-677000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-677000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.422191062s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.42s)

TestPreload (133s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-588000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0430 20:46:41.431213    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0430 20:47:29.849852    7854 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18779-7316/.minikube/profiles/addons-257000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-588000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m32.507430129s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-588000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-588000 image pull gcr.io/k8s-minikube/busybox: (5.973834743s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-588000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-588000: (10.847506269s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-588000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-588000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (20.829935742s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-588000 image list
helpers_test.go:175: Cleaning up "test-preload-588000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-588000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-588000: (2.519542264s)
--- PASS: TestPreload (133.00s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.21s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0 on darwin
- MINIKUBE_LOCATION=18779
- KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4111566535/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4111566535/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4111566535/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4111566535/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.21s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.7s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0 on darwin
- MINIKUBE_LOCATION=18779
- KUBECONFIG=/Users/jenkins/minikube-integration/18779-7316/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3740411204/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3740411204/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3740411204/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3740411204/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.70s)
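
For reference, the chown root:wheel and chmod u+s commands shown in both runs above are what give the hyperkit driver binary its required elevated permissions; they fail here only because sudo needs a password and the tests run with --interactive=false. A minimal sketch (hypothetical path; not part of the suite) of checking for that ownership and setuid bit on macOS:

package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	// Hypothetical install location; the tests use a temp MINIKUBE_HOME instead.
	path := "/usr/local/bin/docker-machine-driver-hyperkit"
	info, err := os.Stat(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	st := info.Sys().(*syscall.Stat_t) // Unix-only type assertion
	setuid := info.Mode()&os.ModeSetuid != 0
	// Expect uid 0 (root), gid 0 (wheel on macOS), setuid=true.
	fmt.Printf("uid=%d gid=%d setuid=%v\n", st.Uid, st.Gid, setuid)
}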

Test skip (17/201)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

TestAddons/parallel/Registry (18.68s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 12.989388ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-cgz9n" [5e55ae79-0cfb-470a-8883-85c9e9df5547] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00556986s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-t9jcq" [e2155a0c-8d42-477f-96d0-650d83f6824b] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005152342s
addons_test.go:340: (dbg) Run:  kubectl --context addons-257000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-257000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-257000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.601105277s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (18.68s)

TestAddons/parallel/Ingress (11.93s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-257000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-257000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-257000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ff798f02-8985-4a1b-9401-7d0b06d1e0f8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ff798f02-8985-4a1b-9401-7d0b06d1e0f8] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004755942s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-257000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.93s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (7.12s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-558000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-558000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-w876k" [d0ae00fc-1e1f-411e-858e-db018b85babf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-w876k" [d0ae00fc-1e1f-411e-858e-db018b85babf] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.005666581s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.12s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)