Test Report: Docker_macOS 19312

c58167e77f3b0efe0c3c561ff8e0552b34c41906:2024-07-21:35447

Tests failed (22/210)

TestOffline (754.02s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-989000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-989000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m33.467838275s)
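In the stderr log below: the first docker network create for 192.168.67.0/24 is rejected with "Pool overlaps with other one on this address space" and minikube falls back to 192.168.76.0/24; the log then jumps from 17:54:28 to 18:00:27 while the preloaded images are extracted into the machine volume; and the 360-second createHost timeout (the configured StartHostTimeout of 6m0s) expires before the "offline-docker-989000" container is started, so every subsequent docker container inspect fails with "No such container". A minimal diagnostic sketch for checking subnet overlap and container state on the affected host, assuming only a standard Docker CLI (both commands are read-only):

	# Print each Docker network with its IPAM subnet(s); any entry covering
	# 192.168.67.0/24 would explain the "Pool overlaps" rejection.
	docker network ls --format '{{.Name}}' | xargs -n1 docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'

	# Confirm whether the kic container was ever created.
	docker ps -a --filter name=offline-docker-989000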

-- stdout --
	* [offline-docker-989000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "offline-docker-989000" primary control-plane node in "offline-docker-989000" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-989000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0721 17:54:25.498240   10001 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:54:25.498542   10001 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:54:25.498548   10001 out.go:304] Setting ErrFile to fd 2...
	I0721 17:54:25.498552   10001 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:54:25.498730   10001 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 17:54:25.500279   10001 out.go:298] Setting JSON to false
	I0721 17:54:25.523726   10001 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6835,"bootTime":1721602830,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0721 17:54:25.523814   10001 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:54:25.545612   10001 out.go:177] * [offline-docker-989000] minikube v1.33.1 on Darwin 14.5
	I0721 17:54:25.587379   10001 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:54:25.587396   10001 notify.go:220] Checking for updates...
	I0721 17:54:25.629119   10001 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	I0721 17:54:25.650343   10001 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0721 17:54:25.671426   10001 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:54:25.692081   10001 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	I0721 17:54:25.713380   10001 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:54:25.734423   10001 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:54:25.758052   10001 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0721 17:54:25.758236   10001 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0721 17:54:25.838295   10001 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:95 OomKillDisable:false NGoroutines:170 SystemTime:2024-07-22 00:54:25.828023778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0721 17:54:25.880216   10001 out.go:177] * Using the docker driver based on user configuration
	I0721 17:54:25.901497   10001 start.go:297] selected driver: docker
	I0721 17:54:25.901527   10001 start.go:901] validating driver "docker" against <nil>
	I0721 17:54:25.901543   10001 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:54:25.905777   10001 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0721 17:54:26.000143   10001 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:95 OomKillDisable:false NGoroutines:170 SystemTime:2024-07-22 00:54:25.990298823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0721 17:54:26.000318   10001 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:54:26.000512   10001 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:54:26.021287   10001 out.go:177] * Using Docker Desktop driver with root privileges
	I0721 17:54:26.042158   10001 cni.go:84] Creating CNI manager for ""
	I0721 17:54:26.042183   10001 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:54:26.042191   10001 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 17:54:26.042247   10001 start.go:340] cluster config:
	{Name:offline-docker-989000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:54:26.063397   10001 out.go:177] * Starting "offline-docker-989000" primary control-plane node in "offline-docker-989000" cluster
	I0721 17:54:26.106542   10001 cache.go:121] Beginning downloading kic base image for docker with docker
	I0721 17:54:26.128301   10001 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0721 17:54:26.170524   10001 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:54:26.170597   10001 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0721 17:54:26.170618   10001 cache.go:56] Caching tarball of preloaded images
	I0721 17:54:26.170635   10001 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0721 17:54:26.170857   10001 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0721 17:54:26.170877   10001 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:54:26.172384   10001 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/offline-docker-989000/config.json ...
	I0721 17:54:26.172510   10001 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/offline-docker-989000/config.json: {Name:mk6bb0aa4ab6cb318e0f37e0cecb4aaa454e397c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0721 17:54:26.245236   10001 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f is of wrong architecture
	I0721 17:54:26.245249   10001 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0721 17:54:26.245414   10001 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0721 17:54:26.245432   10001 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0721 17:54:26.245438   10001 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0721 17:54:26.245447   10001 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0721 17:54:26.245452   10001 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0721 17:54:26.962593   10001 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0721 17:54:26.962646   10001 cache.go:194] Successfully downloaded all kic artifacts
	I0721 17:54:26.962693   10001 start.go:360] acquireMachinesLock for offline-docker-989000: {Name:mkcfe548e1b3dd15869a1675aeb6ea0c8fdd2869 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:54:26.962869   10001 start.go:364] duration metric: took 163.278µs to acquireMachinesLock for "offline-docker-989000"
	I0721 17:54:26.962899   10001 start.go:93] Provisioning new machine with config: &{Name:offline-docker-989000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:54:26.963003   10001 start.go:125] createHost starting for "" (driver="docker")
	I0721 17:54:27.005391   10001 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0721 17:54:27.005750   10001 start.go:159] libmachine.API.Create for "offline-docker-989000" (driver="docker")
	I0721 17:54:27.005794   10001 client.go:168] LocalClient.Create starting
	I0721 17:54:27.005904   10001 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/ca.pem
	I0721 17:54:27.005960   10001 main.go:141] libmachine: Decoding PEM data...
	I0721 17:54:27.005977   10001 main.go:141] libmachine: Parsing certificate...
	I0721 17:54:27.006052   10001 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/cert.pem
	I0721 17:54:27.006094   10001 main.go:141] libmachine: Decoding PEM data...
	I0721 17:54:27.006111   10001 main.go:141] libmachine: Parsing certificate...
	I0721 17:54:27.006735   10001 cli_runner.go:164] Run: docker network inspect offline-docker-989000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0721 17:54:27.025023   10001 cli_runner.go:211] docker network inspect offline-docker-989000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0721 17:54:27.025146   10001 network_create.go:284] running [docker network inspect offline-docker-989000] to gather additional debugging logs...
	I0721 17:54:27.025167   10001 cli_runner.go:164] Run: docker network inspect offline-docker-989000
	W0721 17:54:27.093633   10001 cli_runner.go:211] docker network inspect offline-docker-989000 returned with exit code 1
	I0721 17:54:27.093661   10001 network_create.go:287] error running [docker network inspect offline-docker-989000]: docker network inspect offline-docker-989000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-989000 not found
	I0721 17:54:27.093676   10001 network_create.go:289] output of [docker network inspect offline-docker-989000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-989000 not found
	
	** /stderr **
	I0721 17:54:27.093821   10001 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0721 17:54:27.119327   10001 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:54:27.120941   10001 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:54:27.121300   10001 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014729a0}
	I0721 17:54:27.121316   10001 network_create.go:124] attempt to create docker network offline-docker-989000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0721 17:54:27.121394   10001 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-989000 offline-docker-989000
	W0721 17:54:27.139803   10001 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-989000 offline-docker-989000 returned with exit code 1
	W0721 17:54:27.139834   10001 network_create.go:149] failed to create docker network offline-docker-989000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-989000 offline-docker-989000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0721 17:54:27.139855   10001 network_create.go:116] failed to create docker network offline-docker-989000 192.168.67.0/24, will retry: subnet is taken
	I0721 17:54:27.141324   10001 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:54:27.141702   10001 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014ecc60}
	I0721 17:54:27.141714   10001 network_create.go:124] attempt to create docker network offline-docker-989000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0721 17:54:27.141784   10001 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-989000 offline-docker-989000
	I0721 17:54:27.317980   10001 network_create.go:108] docker network offline-docker-989000 192.168.76.0/24 created
	I0721 17:54:27.318034   10001 kic.go:121] calculated static IP "192.168.76.2" for the "offline-docker-989000" container
	I0721 17:54:27.318154   10001 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0721 17:54:27.338189   10001 cli_runner.go:164] Run: docker volume create offline-docker-989000 --label name.minikube.sigs.k8s.io=offline-docker-989000 --label created_by.minikube.sigs.k8s.io=true
	I0721 17:54:27.357835   10001 oci.go:103] Successfully created a docker volume offline-docker-989000
	I0721 17:54:27.357940   10001 cli_runner.go:164] Run: docker run --rm --name offline-docker-989000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-989000 --entrypoint /usr/bin/test -v offline-docker-989000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0721 17:54:28.050879   10001 oci.go:107] Successfully prepared a docker volume offline-docker-989000
	I0721 17:54:28.050928   10001 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:54:28.050959   10001 kic.go:194] Starting extracting preloaded images to volume ...
	I0721 17:54:28.051108   10001 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-989000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0721 18:00:27.003240   10001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 18:00:27.003382   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:00:27.023210   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	I0721 18:00:27.023337   10001 retry.go:31] will retry after 207.130608ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:27.232847   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:00:27.252969   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	I0721 18:00:27.253076   10001 retry.go:31] will retry after 206.710224ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:27.461539   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:00:27.481068   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	I0721 18:00:27.481181   10001 retry.go:31] will retry after 785.057724ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:28.268654   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:00:28.289679   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	W0721 18:00:28.289781   10001 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	
	W0721 18:00:28.289818   10001 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:28.289880   10001 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0721 18:00:28.289932   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:00:28.307071   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	I0721 18:00:28.307167   10001 retry.go:31] will retry after 238.204354ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:28.547051   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:00:28.567010   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	I0721 18:00:28.567098   10001 retry.go:31] will retry after 335.553509ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:28.903890   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:00:28.924445   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	I0721 18:00:28.924574   10001 retry.go:31] will retry after 384.619674ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:29.311007   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:00:29.330836   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	I0721 18:00:29.330925   10001 retry.go:31] will retry after 951.361763ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:30.282571   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:00:30.301187   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	W0721 18:00:30.301295   10001 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	
	W0721 18:00:30.301313   10001 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:30.301329   10001 start.go:128] duration metric: took 6m3.341291419s to createHost
	I0721 18:00:30.301336   10001 start.go:83] releasing machines lock for "offline-docker-989000", held for 6m3.34144539s
	W0721 18:00:30.301350   10001 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0721 18:00:30.301796   10001 cli_runner.go:164] Run: docker container inspect offline-docker-989000 --format={{.State.Status}}
	W0721 18:00:30.319514   10001 cli_runner.go:211] docker container inspect offline-docker-989000 --format={{.State.Status}} returned with exit code 1
	I0721 18:00:30.319563   10001 delete.go:82] Unable to get host status for offline-docker-989000, assuming it has already been deleted: state: unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	W0721 18:00:30.319637   10001 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0721 18:00:30.319648   10001 start.go:729] Will try again in 5 seconds ...
	I0721 18:00:35.320721   10001 start.go:360] acquireMachinesLock for offline-docker-989000: {Name:mkcfe548e1b3dd15869a1675aeb6ea0c8fdd2869 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 18:00:35.320921   10001 start.go:364] duration metric: took 157.055µs to acquireMachinesLock for "offline-docker-989000"
	I0721 18:00:35.320961   10001 start.go:96] Skipping create...Using existing machine configuration
	I0721 18:00:35.320981   10001 fix.go:54] fixHost starting: 
	I0721 18:00:35.321457   10001 cli_runner.go:164] Run: docker container inspect offline-docker-989000 --format={{.State.Status}}
	W0721 18:00:35.341739   10001 cli_runner.go:211] docker container inspect offline-docker-989000 --format={{.State.Status}} returned with exit code 1
	I0721 18:00:35.341799   10001 fix.go:112] recreateIfNeeded on offline-docker-989000: state= err=unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:35.341823   10001 fix.go:117] machineExists: false. err=machine does not exist
	I0721 18:00:35.363466   10001 out.go:177] * docker "offline-docker-989000" container is missing, will recreate.
	I0721 18:00:35.385426   10001 delete.go:124] DEMOLISHING offline-docker-989000 ...
	I0721 18:00:35.385598   10001 cli_runner.go:164] Run: docker container inspect offline-docker-989000 --format={{.State.Status}}
	W0721 18:00:35.404623   10001 cli_runner.go:211] docker container inspect offline-docker-989000 --format={{.State.Status}} returned with exit code 1
	W0721 18:00:35.404714   10001 stop.go:83] unable to get state: unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:35.404732   10001 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:35.405167   10001 cli_runner.go:164] Run: docker container inspect offline-docker-989000 --format={{.State.Status}}
	W0721 18:00:35.422335   10001 cli_runner.go:211] docker container inspect offline-docker-989000 --format={{.State.Status}} returned with exit code 1
	I0721 18:00:35.422449   10001 delete.go:82] Unable to get host status for offline-docker-989000, assuming it has already been deleted: state: unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:35.422551   10001 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-989000
	W0721 18:00:35.439736   10001 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-989000 returned with exit code 1
	I0721 18:00:35.439773   10001 kic.go:371] could not find the container offline-docker-989000 to remove it. will try anyways
	I0721 18:00:35.439858   10001 cli_runner.go:164] Run: docker container inspect offline-docker-989000 --format={{.State.Status}}
	W0721 18:00:35.457360   10001 cli_runner.go:211] docker container inspect offline-docker-989000 --format={{.State.Status}} returned with exit code 1
	W0721 18:00:35.457417   10001 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:35.457503   10001 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-989000 /bin/bash -c "sudo init 0"
	W0721 18:00:35.474738   10001 cli_runner.go:211] docker exec --privileged -t offline-docker-989000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0721 18:00:35.474772   10001 oci.go:650] error shutdown offline-docker-989000: docker exec --privileged -t offline-docker-989000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:36.477191   10001 cli_runner.go:164] Run: docker container inspect offline-docker-989000 --format={{.State.Status}}
	W0721 18:00:36.496538   10001 cli_runner.go:211] docker container inspect offline-docker-989000 --format={{.State.Status}} returned with exit code 1
	I0721 18:00:36.496591   10001 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:36.496605   10001 oci.go:664] temporary error: container offline-docker-989000 status is  but expect it to be exited
	I0721 18:00:36.496626   10001 retry.go:31] will retry after 551.164264ms: couldn't verify container is exited. %v: unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:37.050221   10001 cli_runner.go:164] Run: docker container inspect offline-docker-989000 --format={{.State.Status}}
	W0721 18:00:37.069979   10001 cli_runner.go:211] docker container inspect offline-docker-989000 --format={{.State.Status}} returned with exit code 1
	I0721 18:00:37.070035   10001 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:37.070050   10001 oci.go:664] temporary error: container offline-docker-989000 status is  but expect it to be exited
	I0721 18:00:37.070077   10001 retry.go:31] will retry after 405.812148ms: couldn't verify container is exited. %v: unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:37.477020   10001 cli_runner.go:164] Run: docker container inspect offline-docker-989000 --format={{.State.Status}}
	W0721 18:00:37.496853   10001 cli_runner.go:211] docker container inspect offline-docker-989000 --format={{.State.Status}} returned with exit code 1
	I0721 18:00:37.496901   10001 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:37.496909   10001 oci.go:664] temporary error: container offline-docker-989000 status is  but expect it to be exited
	I0721 18:00:37.496932   10001 retry.go:31] will retry after 856.045463ms: couldn't verify container is exited. %v: unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:38.355410   10001 cli_runner.go:164] Run: docker container inspect offline-docker-989000 --format={{.State.Status}}
	W0721 18:00:38.375972   10001 cli_runner.go:211] docker container inspect offline-docker-989000 --format={{.State.Status}} returned with exit code 1
	I0721 18:00:38.376025   10001 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:38.376034   10001 oci.go:664] temporary error: container offline-docker-989000 status is  but expect it to be exited
	I0721 18:00:38.376055   10001 retry.go:31] will retry after 1.230071455s: couldn't verify container is exited. %v: unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:39.607468   10001 cli_runner.go:164] Run: docker container inspect offline-docker-989000 --format={{.State.Status}}
	W0721 18:00:39.627680   10001 cli_runner.go:211] docker container inspect offline-docker-989000 --format={{.State.Status}} returned with exit code 1
	I0721 18:00:39.627727   10001 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:39.627738   10001 oci.go:664] temporary error: container offline-docker-989000 status is  but expect it to be exited
	I0721 18:00:39.627764   10001 retry.go:31] will retry after 3.297438205s: couldn't verify container is exited. %v: unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:42.926713   10001 cli_runner.go:164] Run: docker container inspect offline-docker-989000 --format={{.State.Status}}
	W0721 18:00:42.947010   10001 cli_runner.go:211] docker container inspect offline-docker-989000 --format={{.State.Status}} returned with exit code 1
	I0721 18:00:42.947058   10001 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:42.947067   10001 oci.go:664] temporary error: container offline-docker-989000 status is  but expect it to be exited
	I0721 18:00:42.947089   10001 retry.go:31] will retry after 3.154773105s: couldn't verify container is exited. %v: unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:46.102477   10001 cli_runner.go:164] Run: docker container inspect offline-docker-989000 --format={{.State.Status}}
	W0721 18:00:46.121444   10001 cli_runner.go:211] docker container inspect offline-docker-989000 --format={{.State.Status}} returned with exit code 1
	I0721 18:00:46.121489   10001 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:46.121500   10001 oci.go:664] temporary error: container offline-docker-989000 status is  but expect it to be exited
	I0721 18:00:46.121523   10001 retry.go:31] will retry after 5.015422135s: couldn't verify container is exited. %v: unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:51.139223   10001 cli_runner.go:164] Run: docker container inspect offline-docker-989000 --format={{.State.Status}}
	W0721 18:00:51.158191   10001 cli_runner.go:211] docker container inspect offline-docker-989000 --format={{.State.Status}} returned with exit code 1
	I0721 18:00:51.158246   10001 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:00:51.158257   10001 oci.go:664] temporary error: container offline-docker-989000 status is  but expect it to be exited
	I0721 18:00:51.158291   10001 oci.go:88] couldn't shut down offline-docker-989000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	 
	I0721 18:00:51.158371   10001 cli_runner.go:164] Run: docker rm -f -v offline-docker-989000
	I0721 18:00:51.176051   10001 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-989000
	W0721 18:00:51.193238   10001 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-989000 returned with exit code 1
	I0721 18:00:51.193357   10001 cli_runner.go:164] Run: docker network inspect offline-docker-989000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0721 18:00:51.211019   10001 cli_runner.go:164] Run: docker network rm offline-docker-989000
	I0721 18:00:51.292913   10001 fix.go:124] Sleeping 1 second for extra luck!
	I0721 18:00:52.295065   10001 start.go:125] createHost starting for "" (driver="docker")
	I0721 18:00:52.317195   10001 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0721 18:00:52.317361   10001 start.go:159] libmachine.API.Create for "offline-docker-989000" (driver="docker")
	I0721 18:00:52.317397   10001 client.go:168] LocalClient.Create starting
	I0721 18:00:52.317632   10001 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/ca.pem
	I0721 18:00:52.317733   10001 main.go:141] libmachine: Decoding PEM data...
	I0721 18:00:52.317757   10001 main.go:141] libmachine: Parsing certificate...
	I0721 18:00:52.317850   10001 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/cert.pem
	I0721 18:00:52.317927   10001 main.go:141] libmachine: Decoding PEM data...
	I0721 18:00:52.317941   10001 main.go:141] libmachine: Parsing certificate...
	I0721 18:00:52.339714   10001 cli_runner.go:164] Run: docker network inspect offline-docker-989000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0721 18:00:52.358988   10001 cli_runner.go:211] docker network inspect offline-docker-989000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0721 18:00:52.359088   10001 network_create.go:284] running [docker network inspect offline-docker-989000] to gather additional debugging logs...
	I0721 18:00:52.359104   10001 cli_runner.go:164] Run: docker network inspect offline-docker-989000
	W0721 18:00:52.376553   10001 cli_runner.go:211] docker network inspect offline-docker-989000 returned with exit code 1
	I0721 18:00:52.376587   10001 network_create.go:287] error running [docker network inspect offline-docker-989000]: docker network inspect offline-docker-989000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-989000 not found
	I0721 18:00:52.376599   10001 network_create.go:289] output of [docker network inspect offline-docker-989000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-989000 not found
	
	** /stderr **
	I0721 18:00:52.376737   10001 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0721 18:00:52.396414   10001 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:00:52.397744   10001 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:00:52.399176   10001 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:00:52.400867   10001 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:00:52.402625   10001 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:00:52.403406   10001 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00159f9d0}
	I0721 18:00:52.403431   10001 network_create.go:124] attempt to create docker network offline-docker-989000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0721 18:00:52.403561   10001 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-989000 offline-docker-989000
	I0721 18:00:52.467530   10001 network_create.go:108] docker network offline-docker-989000 192.168.94.0/24 created
	I0721 18:00:52.467561   10001 kic.go:121] calculated static IP "192.168.94.2" for the "offline-docker-989000" container
	I0721 18:00:52.467677   10001 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0721 18:00:52.486727   10001 cli_runner.go:164] Run: docker volume create offline-docker-989000 --label name.minikube.sigs.k8s.io=offline-docker-989000 --label created_by.minikube.sigs.k8s.io=true
	I0721 18:00:52.503892   10001 oci.go:103] Successfully created a docker volume offline-docker-989000
	I0721 18:00:52.504035   10001 cli_runner.go:164] Run: docker run --rm --name offline-docker-989000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-989000 --entrypoint /usr/bin/test -v offline-docker-989000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0721 18:00:52.760921   10001 oci.go:107] Successfully prepared a docker volume offline-docker-989000
	I0721 18:00:52.760974   10001 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 18:00:52.760991   10001 kic.go:194] Starting extracting preloaded images to volume ...
	I0721 18:00:52.761096   10001 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-989000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0721 18:06:52.405941   10001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 18:06:52.406073   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:06:52.425106   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	I0721 18:06:52.425218   10001 retry.go:31] will retry after 219.437075ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:06:52.647077   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:06:52.667341   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	I0721 18:06:52.667465   10001 retry.go:31] will retry after 323.15394ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:06:52.992187   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:06:53.012175   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	I0721 18:06:53.012276   10001 retry.go:31] will retry after 631.510177ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:06:53.646223   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:06:53.666201   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	W0721 18:06:53.666321   10001 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	
	W0721 18:06:53.666342   10001 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:06:53.666406   10001 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0721 18:06:53.666467   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:06:53.683674   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	I0721 18:06:53.683784   10001 retry.go:31] will retry after 358.127391ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:06:54.044341   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:06:54.064830   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	I0721 18:06:54.064930   10001 retry.go:31] will retry after 517.58666ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:06:54.584919   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:06:54.604982   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	I0721 18:06:54.605093   10001 retry.go:31] will retry after 793.11573ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:06:55.398657   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:06:55.418564   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	W0721 18:06:55.418678   10001 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	
	W0721 18:06:55.418692   10001 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:06:55.418702   10001 start.go:128] duration metric: took 6m3.037459488s to createHost
	I0721 18:06:55.418780   10001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 18:06:55.418841   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:06:55.436412   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	I0721 18:06:55.436514   10001 retry.go:31] will retry after 323.897499ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:06:55.762896   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:06:55.782906   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	I0721 18:06:55.783004   10001 retry.go:31] will retry after 189.425355ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:06:55.973975   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:06:55.993708   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	I0721 18:06:55.993806   10001 retry.go:31] will retry after 822.091478ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:06:56.818090   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:06:56.838484   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	W0721 18:06:56.838593   10001 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	
	W0721 18:06:56.838607   10001 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:06:56.838667   10001 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0721 18:06:56.838725   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:06:56.856271   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	I0721 18:06:56.856367   10001 retry.go:31] will retry after 177.881361ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:06:57.036652   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:06:57.057265   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	I0721 18:06:57.057359   10001 retry.go:31] will retry after 374.405727ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:06:57.433348   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:06:57.453443   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	I0721 18:06:57.453535   10001 retry.go:31] will retry after 631.7064ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:06:58.087348   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:06:58.107334   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	I0721 18:06:58.107432   10001 retry.go:31] will retry after 716.321529ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:06:58.826259   10001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000
	W0721 18:06:58.845671   10001 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000 returned with exit code 1
	W0721 18:06:58.845769   10001 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	
	W0721 18:06:58.845783   10001 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-989000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-989000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000
	I0721 18:06:58.845792   10001 fix.go:56] duration metric: took 6m23.438867211s for fixHost
	I0721 18:06:58.845798   10001 start.go:83] releasing machines lock for "offline-docker-989000", held for 6m23.438915572s
	W0721 18:06:58.845871   10001 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-989000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-989000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0721 18:06:58.889463   10001 out.go:177] 
	W0721 18:06:58.911258   10001 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0721 18:06:58.911302   10001 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0721 18:06:58.911370   10001 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0721 18:06:58.932415   10001 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-989000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:626: *** TestOffline FAILED at 2024-07-21 18:06:59.006756 -0700 PDT m=+6177.740751317
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-989000
helpers_test.go:235: (dbg) docker inspect offline-docker-989000:

-- stdout --
	[
	    {
	        "Name": "offline-docker-989000",
	        "Id": "2764cabec0cb1342bb500ae7ee58c8d00bc57327d75ba499dc22b6177aa556d4",
	        "Created": "2024-07-22T01:00:52.419866952Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-989000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-989000 -n offline-docker-989000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-989000 -n offline-docker-989000: exit status 7 (74.733493ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0721 18:06:59.101233   10661 status.go:249] status error: host: state: unknown state "offline-docker-989000": docker container inspect offline-docker-989000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-989000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-989000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-989000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-989000
--- FAIL: TestOffline (754.02s)
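
The retries above all fail the same way because no container ever existed: the volume-extract "docker run" issued at 18:00:52 was the last creation step that ran, createHost then hit its 360-second limit, and the post-mortem "docker inspect" shows only the network survived (its "Containers" map is empty). Below is a minimal Go sketch of the probe the log keeps repeating, assuming only a local docker CLI and the container name taken from the log; this is not minikube's helper code.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same probe the log retries: inspect exits with status 1 and
		// prints "No such container" when the container is absent, so
		// retrying without recreating the container can never succeed.
		out, err := exec.Command("docker", "container", "inspect",
			"--format", "{{.State.Status}}", "offline-docker-989000").CombinedOutput()
		if err != nil {
			fmt.Printf("inspect failed (%v): %s", err, out)
			return
		}
		fmt.Printf("state: %s", out)
	}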

TestCertOptions (7201.707s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-222000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
panic: test timed out after 2h0m0s
running tests:
	TestCertExpiration (4m27s)
	TestCertOptions (3m49s)
	TestNetworkPlugins (29m36s)
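
The 2h0m0s limit is the Go test binary's global -timeout; when it expires, testing's alarm goroutine panics and dumps every goroutine, which is everything that follows. Only the three tests listed were actually executing: the many goroutines further down parked for ~30 minutes in waitParallel are parallel subtests queued behind them. A hedged sketch of checking that deadline before starting a long step, using testing.T.Deadline (a hypothetical guard, not code from this suite; assumes the usual "testing" and "time" imports):

	// guard skips the test when less than `need` remains before the
	// process-wide -timeout alarm (the one that fired above) goes off.
	func guard(t *testing.T, need time.Duration) {
		if dl, ok := t.Deadline(); ok && time.Until(dl) < need {
			t.Skipf("only %s left before -timeout, need %s",
				time.Until(dl).Round(time.Second), need)
		}
	}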

goroutine 2560 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 17 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0005fb040, 0xc0008f1bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000010a08, {0x1406fae0, 0x2a, 0x2a}, {0xfb47825?, 0x11680064?, 0x14092aa0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0005ee280)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0005ee280)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 8 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0005a4d00)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 2557 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x5b9589e0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0019c2300?, 0xc001775a8d?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0019c2300, {0xc001775a8d, 0x573, 0x573})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001e3c078, {0xc001775a8d?, 0xc000105340?, 0x223?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001ce6480, {0x12ce9ad8, 0xc001f88128})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x12ce9c18, 0xc001ce6480}, {0x12ce9ad8, 0xc001f88128}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000a77678?, {0x12ce9c18, 0xc001ce6480})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000a77738?, {0x12ce9c18?, 0xc001ce6480?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x12ce9c18, 0xc001ce6480}, {0x12ce9b98, 0xc001e3c078}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001f463c0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 651
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 2248 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0005c7d10)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001b269c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001b269c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001b269c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001b269c0, 0xc001e7e100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2246
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2237 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0005c7d10)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020fd040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020fd040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc0020fd040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:215 +0x39
testing.tRunner(0xc0020fd040, 0x12cdee80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 651 [syscall, 3 minutes]:
syscall.syscall6(0xc001ce7f80?, 0x1000000000010?, 0x10000000019?, 0x5ba5f1a0?, 0x90?, 0x149b3108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc000af98a0?, 0xfa880c5?, 0x90?, 0x12c4b420?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xfbb89e5?, 0xc000af98d4, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc001ce82a0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000204c00)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc000204c00)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc001b26d00, 0xc000204c00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertOptions(0xc001b26d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x445
testing.tRunner(0xc001b26d00, 0x12cdedd8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390
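
goroutine 651 is TestCertOptions itself, parked in syscall.wait4 inside exec.(*Cmd).Wait for the "minikube start -p cert-options-222000 ..." child it launched; per the panic header it had been running only 3m49s, so the 7201.707s in this test's heading matches the 2h global timeout, not the test's own work. One way to keep a child process from outliving a budget is exec.CommandContext, sketched here with an arbitrary 10-minute cap (an illustration only, not how helpers_test.go actually runs commands):

	package main

	import (
		"context"
		"os/exec"
		"time"
	)

	func main() {
		// Cap the child at an assumed 10 minutes: when ctx expires,
		// CommandContext kills it and Run returns, instead of blocking
		// in wait4 the way goroutine 651 does above.
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
		defer cancel()
		if err := exec.CommandContext(ctx, "out/minikube-darwin-amd64",
			"start", "-p", "cert-options-222000", "--driver=docker").Run(); err != nil {
			panic(err)
		}
	}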

goroutine 2251 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0005c7d10)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001b276c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001b276c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001b276c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001b276c0, 0xc001e7e280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2246
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 68 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 67
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

goroutine 2253 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0005c7d10)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001b27d40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001b27d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001b27d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001b27d40, 0xc001e7e400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2246
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2559 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc000204c00, 0xc000a0c3c0)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 651
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2230 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0005c7d10)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020fcb60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020fcb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc0020fcb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc0020fcb60, 0x12cdef00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2558 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x5b958ad8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0019c23c0?, 0xc0013a0e00?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0019c23c0, {0xc0013a0e00, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001e3c090, {0xc0013a0e00?, 0x9?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001ce64b0, {0x12ce9ad8, 0xc001f88130})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x12ce9c18, 0xc001ce64b0}, {0x12ce9ad8, 0xc001f88130}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x12ce9c18, 0xc001ce64b0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xfa7fa3e?, {0x12ce9c18?, 0xc001ce64b0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x12ce9c18, 0xc001ce64b0}, {0x12ce9b98, 0xc001e3c090}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001457080?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 651
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 2551 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x5b958600, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0019c2a20?, 0xc001775296?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0019c2a20, {0xc001775296, 0x56a, 0x56a})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001e3c0e0, {0xc001775296?, 0xc000585a40?, 0x22c?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001ce6630, {0x12ce9ad8, 0xc001f88090})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x12ce9c18, 0xc001ce6630}, {0x12ce9ad8, 0xc001f88090}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000a7d678?, {0x12ce9c18, 0xc001ce6630})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000a7d738?, {0x12ce9c18?, 0xc001ce6630?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x12ce9c18, 0xc001ce6630}, {0x12ce9b98, 0xc001e3c0e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001f46720?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 652
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 652 [syscall, 4 minutes]:
syscall.syscall6(0xc001ce7f80?, 0x1000000000010?, 0x10000000019?, 0x5ba5f1a0?, 0x90?, 0x149b3108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc001449a40?, 0xfa880c5?, 0x90?, 0x12c4b420?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xfbb89e5?, 0xc001449a74, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc001ce8330)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000205680)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc000205680)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc001b26ea0, 0xc000205680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertExpiration(0xc001b26ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:123 +0x2c5
testing.tRunner(0xc001b26ea0, 0x12cdedd0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2252 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0005c7d10)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001b27ba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001b27ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001b27ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001b27ba0, 0xc001e7e300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2246
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1877 [syscall, 95 minutes]:
syscall.syscall(0x0?, 0xc0017fe390?, 0xc000a797b0?, 0xfc01c95?)
	/usr/local/go/src/runtime/sys_darwin.go:23 +0x70
syscall.Flock(0xc0020781b0?, 0x1?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:682 +0x29
github.com/juju/mutex/v2.acquireFlock.func3()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:114 +0x34
github.com/juju/mutex/v2.acquireFlock.func4()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:121 +0x58
github.com/juju/mutex/v2.acquireFlock.func5()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:151 +0x22
created by github.com/juju/mutex/v2.acquireFlock in goroutine 1857
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:150 +0x4b1
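
goroutine 1877 has been blocked for 95 minutes in syscall.Flock inside juju/mutex, suggesting another (possibly wedged) process still holds the lock file, and a plain LOCK_EX wait has no deadline of its own. A minimal illustration of the blocking versus non-blocking variants of the same syscall (a sketch with a made-up lock path; juju/mutex's actual acquire loop is more involved):

	package main

	import (
		"fmt"
		"os"
		"syscall"
	)

	func main() {
		f, err := os.OpenFile("/tmp/flock-demo.lock", os.O_CREATE|os.O_RDWR, 0o600)
		if err != nil {
			panic(err)
		}
		defer f.Close()
		// LOCK_EX alone blocks until the current holder releases the lock,
		// which is the state goroutine 1877 is stuck in. Adding LOCK_NB
		// fails fast with EWOULDBLOCK so the caller can retry on a deadline.
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err != nil {
			fmt.Println("lock held elsewhere:", err)
			return
		}
		fmt.Println("lock acquired")
	}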

goroutine 2246 [chan receive, 30 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc001b26680, 0xc00142e1b0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2154
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2553 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc000205680, 0xc001f467e0)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 652
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 187 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0013eed20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 142
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 188 [chan receive, 115 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000680800, 0xc0000662a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 142
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2249 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0005c7d10)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001b26b60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001b26b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001b26b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001b26b60, 0xc001e7e180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2246
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 191 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc0006800d0, 0x2c)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x127d4440?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0013eec00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000680800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000496cb0, {0x12ceb0c0, 0xc000909dd0}, 0x1, 0xc0000662a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000496cb0, 0x3b9aca00, 0x0, 0x1, 0xc0000662a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 188
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 192 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x12d0edc0, 0xc0000662a0}, 0xc000110750, 0xc000a6df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x12d0edc0, 0xc0000662a0}, 0xd?, 0xc000110750, 0xc000110798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x12d0edc0?, 0xc0000662a0?}, 0xc0005faea0?, 0xfbbb6a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xfbbc605?, 0xc0005faea0?, 0xc000b040c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 188
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 193 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 192
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2255 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0005c7d10)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0015141a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015141a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0015141a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0015141a0, 0xc001e7e500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2246
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2552 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x5b9586f8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0019c2ae0?, 0xc0013a0600?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0019c2ae0, {0xc0013a0600, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001e3c0f8, {0xc0013a0600?, 0xfa8b514?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001ce6660, {0x12ce9ad8, 0xc001f880a0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x12ce9c18, 0xc001ce6660}, {0x12ce9ad8, 0xc001f880a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x12ce9c18, 0xc001ce6660})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00021cfc0?, {0x12ce9c18?, 0xc001ce6660?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x12ce9c18, 0xc001ce6660}, {0x12ce9b98, 0xc001e3c0f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0018e6600?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 652
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 2154 [chan receive, 30 minutes]:
testing.(*T).Run(0xc0020fc1a0, {0x116265ca?, 0x54668a3a5b2?}, 0xc00142e1b0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0020fc1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0020fc1a0, 0x12cdeeb8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2247 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0005c7d10)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001b26820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001b26820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001b26820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001b26820, 0xc001e7e080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2246
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2238 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0005c7d10)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020fd1e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020fd1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc0020fd1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:292 +0xb4
testing.tRunner(0xc0020fd1e0, 0x12cdee98)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 902 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 901
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 739 [IO wait, 111 minutes]:
internal/poll.runtime_pollWait(0x5b958bd0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0016c2200?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0016c2200)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc0016c2200)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0009ae500)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0009ae500)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0005160f0, {0x12d01cb0, 0xc0009ae500})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc0005160f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc0013aa680?, 0xc0013aa9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 720
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 2235 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0005c7d10)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020fc000)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020fc000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc0020fc000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:85 +0x89
testing.tRunner(0xc0020fc000, 0x12cdeee0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 885 [chan receive, 109 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000802240, 0xc0000662a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 864
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 1232 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc001fba600, 0xc001e83c80)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 851
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 940 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc00183b800, 0xc000067ce0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 939
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2155 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0005c7d10)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020fc340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020fc340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc0020fc340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc0020fc340, 0x12cdeec0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 884 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00181f080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 864
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 1151 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc001d6d080, 0xc001d6aa80)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1150
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 901 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x12d0edc0, 0xc0000662a0}, 0xc000093f50, 0xc000995f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x12d0edc0, 0xc0000662a0}, 0x40?, 0xc000093f50, 0xc000093f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x12d0edc0?, 0xc0000662a0?}, 0xc0013aad00?, 0xfbbb6a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000093fd0?, 0xfc019a4?, 0xc000a0c840?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 885
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2236 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0005c7d10)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020fcea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020fcea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc0020fcea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:143 +0x86
testing.tRunner(0xc0020fcea0, 0x12cdef08)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2156 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0005c7d10)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020fc820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020fc820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc0020fc820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc0020fc820, 0x12cdeed0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 900 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000802150, 0x2b)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x127d4440?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00181ef60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000802240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000618420, {0x12ceb0c0, 0xc001458030}, 0x1, 0xc0000662a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000618420, 0x3b9aca00, 0x0, 0x1, 0xc0000662a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 885
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 1254 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0xc001e57680)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1245
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

goroutine 2250 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0005c7d10)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001b27520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001b27520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001b27520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001b27520, 0xc001e7e200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2246
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1183 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc001ebcf00, 0xc001e82fc0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1182
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 1255 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0xc001e57680)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1245
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

goroutine 2254 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc0005c7d10)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001514000)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001514000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001514000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001514000, 0xc001e7e480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2246
	/usr/local/go/src/testing/testing.go:1742 +0x390

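The goroutine dump above shows where the run wedged: several tests (TestMissingContainerUpgrade, TestRunningBinaryUpgrade, TestStoppedBinaryUpgrade, TestNoKubernetes, TestPause, and the TestNetworkPlugins subtests) have sat in testing.(*testContext).waitParallel for 30 minutes. That is the blocking half of t.Parallel(): a test announces it wants to run in parallel and then waits for one of the -test.parallel slots to free up, so with every slot held by a stuck "minikube start" the queued tests never get scheduled. A minimal sketch of the pattern in these stacks, assuming only standard testing semantics (MaybeParallel in helpers_test.go presumably adds its own gating on top):

	// parallel_sketch_test.go: illustrative only, not minikube code.
	package integration

	import "testing"

	// maybeParallel mirrors the call chain in the stacks above:
	// MaybeParallel -> t.Parallel() -> testing.(*testContext).waitParallel.
	func maybeParallel(t *testing.T) {
		t.Parallel() // blocks here until the runner grants a parallel slot
	}

	func TestExample(t *testing.T) {
		maybeParallel(t)
		// the test body runs only once a slot is free
	}
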
TestDockerFlags (755.93s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-119000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0721 18:09:14.779253    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 18:09:54.545063    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 18:13:57.833063    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 18:14:14.776822    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 18:14:54.542791    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 18:19:14.774125    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-119000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 52 (12m35.061920101s)
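The two docker_test.go:51 lines bracket one invocation: the helper logs the exact command, runs it, and reports the non-zero exit (status 52 after 12m35s). A hypothetical sketch of what such a run helper reduces to, with the flag list abbreviated and names that are illustrative rather than the actual helpers_test.go implementation:

	// run_sketch.go: illustrative only.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p", "docker-flags-119000", "--driver=docker")
		out, err := cmd.CombinedOutput() // capture stdout+stderr, as echoed in the blocks below
		fmt.Print(string(out))
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("exit status:", ee.ExitCode()) // 52 for this run
		}
	}

The interleaved E0721 cert_rotation.go:168 lines appear to be background noise from a client-go cert reloader in the long-lived test process (pid 2043) looking for client.crt files of profiles deleted earlier in the run, not part of this test's failure.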

-- stdout --
	* [docker-flags-119000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "docker-flags-119000" primary control-plane node in "docker-flags-119000" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-119000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0721 18:07:36.247132   10752 out.go:291] Setting OutFile to fd 1 ...
	I0721 18:07:36.247406   10752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 18:07:36.247411   10752 out.go:304] Setting ErrFile to fd 2...
	I0721 18:07:36.247415   10752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 18:07:36.247605   10752 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 18:07:36.249191   10752 out.go:298] Setting JSON to false
	I0721 18:07:36.271686   10752 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":7625,"bootTime":1721602831,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0721 18:07:36.271771   10752 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 18:07:36.294823   10752 out.go:177] * [docker-flags-119000] minikube v1.33.1 on Darwin 14.5
	I0721 18:07:36.337589   10752 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 18:07:36.337616   10752 notify.go:220] Checking for updates...
	I0721 18:07:36.380180   10752 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	I0721 18:07:36.401510   10752 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0721 18:07:36.422464   10752 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 18:07:36.443408   10752 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	I0721 18:07:36.464490   10752 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 18:07:36.486095   10752 config.go:182] Loaded profile config "force-systemd-flag-246000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 18:07:36.486210   10752 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 18:07:36.509463   10752 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0721 18:07:36.509796   10752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0721 18:07:36.589091   10752 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:115 OomKillDisable:false NGoroutines:230 SystemTime:2024-07-22 01:07:36.580339194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0721 18:07:36.631832   10752 out.go:177] * Using the docker driver based on user configuration
	I0721 18:07:36.652925   10752 start.go:297] selected driver: docker
	I0721 18:07:36.652955   10752 start.go:901] validating driver "docker" against <nil>
	I0721 18:07:36.652970   10752 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 18:07:36.656619   10752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0721 18:07:36.735165   10752 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:115 OomKillDisable:false NGoroutines:230 SystemTime:2024-07-22 01:07:36.726727593 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0721 18:07:36.735373   10752 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 18:07:36.735554   10752 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0721 18:07:36.757276   10752 out.go:177] * Using Docker Desktop driver with root privileges
	I0721 18:07:36.779039   10752 cni.go:84] Creating CNI manager for ""
	I0721 18:07:36.779092   10752 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 18:07:36.779099   10752 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 18:07:36.779158   10752 start.go:340] cluster config:
	{Name:docker-flags-119000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-119000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 18:07:36.801071   10752 out.go:177] * Starting "docker-flags-119000" primary control-plane node in "docker-flags-119000" cluster
	I0721 18:07:36.843192   10752 cache.go:121] Beginning downloading kic base image for docker with docker
	I0721 18:07:36.865127   10752 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0721 18:07:36.907060   10752 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 18:07:36.907113   10752 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0721 18:07:36.907137   10752 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0721 18:07:36.907170   10752 cache.go:56] Caching tarball of preloaded images
	I0721 18:07:36.907414   10752 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0721 18:07:36.907434   10752 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 18:07:36.907604   10752 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/docker-flags-119000/config.json ...
	I0721 18:07:36.908271   10752 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/docker-flags-119000/config.json: {Name:mkecc3f3e1c7d2a96943d975334d6c825542855c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0721 18:07:36.932876   10752 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f is of wrong architecture
	I0721 18:07:36.932899   10752 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0721 18:07:36.933015   10752 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0721 18:07:36.933032   10752 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0721 18:07:36.933038   10752 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0721 18:07:36.933047   10752 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0721 18:07:36.933052   10752 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0721 18:07:36.936140   10752 image.go:273] response: 
	I0721 18:07:37.578292   10752 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0721 18:07:37.578348   10752 cache.go:194] Successfully downloaded all kic artifacts
	I0721 18:07:37.578397   10752 start.go:360] acquireMachinesLock for docker-flags-119000: {Name:mkcaf72638d244b9bbb117caeb9d8c6caad09330 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 18:07:37.578576   10752 start.go:364] duration metric: took 167.201µs to acquireMachinesLock for "docker-flags-119000"
	I0721 18:07:37.578603   10752 start.go:93] Provisioning new machine with config: &{Name:docker-flags-119000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-119000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 18:07:37.578680   10752 start.go:125] createHost starting for "" (driver="docker")
	I0721 18:07:37.620981   10752 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0721 18:07:37.621172   10752 start.go:159] libmachine.API.Create for "docker-flags-119000" (driver="docker")
	I0721 18:07:37.621202   10752 client.go:168] LocalClient.Create starting
	I0721 18:07:37.621303   10752 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/ca.pem
	I0721 18:07:37.621356   10752 main.go:141] libmachine: Decoding PEM data...
	I0721 18:07:37.621373   10752 main.go:141] libmachine: Parsing certificate...
	I0721 18:07:37.621423   10752 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/cert.pem
	I0721 18:07:37.621467   10752 main.go:141] libmachine: Decoding PEM data...
	I0721 18:07:37.621476   10752 main.go:141] libmachine: Parsing certificate...
	I0721 18:07:37.622035   10752 cli_runner.go:164] Run: docker network inspect docker-flags-119000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0721 18:07:37.641510   10752 cli_runner.go:211] docker network inspect docker-flags-119000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0721 18:07:37.641615   10752 network_create.go:284] running [docker network inspect docker-flags-119000] to gather additional debugging logs...
	I0721 18:07:37.641630   10752 cli_runner.go:164] Run: docker network inspect docker-flags-119000
	W0721 18:07:37.658793   10752 cli_runner.go:211] docker network inspect docker-flags-119000 returned with exit code 1
	I0721 18:07:37.658828   10752 network_create.go:287] error running [docker network inspect docker-flags-119000]: docker network inspect docker-flags-119000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-119000 not found
	I0721 18:07:37.658843   10752 network_create.go:289] output of [docker network inspect docker-flags-119000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-119000 not found
	
	** /stderr **
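
The long --format argument above is a Go text/template that the docker CLI renders over the network object to produce a one-line JSON summary; the exit-1 round trips simply mean the docker-flags-119000 network does not exist yet. A toy equivalent of that templating step, with a struct and field set that are illustrative only:

	// template_sketch.go: illustrative only.
	package main

	import (
		"os"
		"text/template"
	)

	type Network struct {
		Name   string
		Driver string
	}

	func main() {
		// Same idea as the CLI's --format: struct fields spliced into a JSON shell.
		tmpl := template.Must(template.New("net").Parse(`{"Name": "{{.Name}}","Driver": "{{.Driver}}"}`))
		_ = tmpl.Execute(os.Stdout, Network{Name: "docker-flags-119000", Driver: "bridge"})
	}
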
	I0721 18:07:37.658960   10752 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0721 18:07:37.677952   10752 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:07:37.679532   10752 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:07:37.680951   10752 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:07:37.682352   10752 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:07:37.682682   10752 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001680b30}
	I0721 18:07:37.682696   10752 network_create.go:124] attempt to create docker network docker-flags-119000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0721 18:07:37.682765   10752 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-119000 docker-flags-119000
	I0721 18:07:37.746981   10752 network_create.go:108] docker network docker-flags-119000 192.168.85.0/24 created
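
The four "skipping subnet" lines followed by the pick of 192.168.85.0/24 are minikube's free-subnet scan. The 49 -> 58 -> 67 -> 76 -> 85 progression suggests candidates advance by 9 in the third octet, which the sketch below assumes; the real reservation check consults live networks and interfaces rather than a plain map:

	// subnet_sketch.go: illustrative only.
	package main

	import "fmt"

	// freeSubnet walks candidate /24s and returns the first one not reserved.
	func freeSubnet(taken map[int]bool) string {
		for octet := 49; octet < 256; octet += 9 {
			if !taken[octet] {
				return fmt.Sprintf("192.168.%d.0/24", octet)
			}
		}
		return ""
	}

	func main() {
		reserved := map[int]bool{49: true, 58: true, 67: true, 76: true}
		fmt.Println(freeSubnet(reserved)) // prints 192.168.85.0/24, matching the log
	}
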
	I0721 18:07:37.747021   10752 kic.go:121] calculated static IP "192.168.85.2" for the "docker-flags-119000" container
	I0721 18:07:37.747125   10752 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0721 18:07:37.766906   10752 cli_runner.go:164] Run: docker volume create docker-flags-119000 --label name.minikube.sigs.k8s.io=docker-flags-119000 --label created_by.minikube.sigs.k8s.io=true
	I0721 18:07:37.785266   10752 oci.go:103] Successfully created a docker volume docker-flags-119000
	I0721 18:07:37.785371   10752 cli_runner.go:164] Run: docker run --rm --name docker-flags-119000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-119000 --entrypoint /usr/bin/test -v docker-flags-119000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0721 18:07:38.204542   10752 oci.go:107] Successfully prepared a docker volume docker-flags-119000
	I0721 18:07:38.204590   10752 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 18:07:38.204607   10752 kic.go:194] Starting extracting preloaded images to volume ...
	I0721 18:07:38.204742   10752 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-119000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0721 18:13:37.620238   10752 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 18:13:37.620352   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:13:37.640494   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	I0721 18:13:37.640614   10752 retry.go:31] will retry after 359.601993ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:38.000899   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:13:38.021265   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	I0721 18:13:38.021355   10752 retry.go:31] will retry after 273.074476ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:38.294747   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:13:38.313769   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	I0721 18:13:38.313873   10752 retry.go:31] will retry after 412.529033ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:38.727453   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:13:38.747385   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	I0721 18:13:38.747487   10752 retry.go:31] will retry after 674.572999ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:39.423965   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:13:39.444662   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	W0721 18:13:39.444771   10752 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	
	W0721 18:13:39.444794   10752 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:39.444854   10752 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0721 18:13:39.444916   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:13:39.462457   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	I0721 18:13:39.462550   10752 retry.go:31] will retry after 213.244886ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:39.676818   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:13:39.697167   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	I0721 18:13:39.697265   10752 retry.go:31] will retry after 475.427043ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:40.175133   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:13:40.194553   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	I0721 18:13:40.194641   10752 retry.go:31] will retry after 765.037732ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:40.960233   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:13:40.979464   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	W0721 18:13:40.979559   10752 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	
	W0721 18:13:40.979585   10752 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:40.979601   10752 start.go:128] duration metric: took 6m3.404034374s to createHost
	I0721 18:13:40.979607   10752 start.go:83] releasing machines lock for "docker-flags-119000", held for 6m3.404149774s
	W0721 18:13:40.979621   10752 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0721 18:13:40.980086   10752 cli_runner.go:164] Run: docker container inspect docker-flags-119000 --format={{.State.Status}}
	W0721 18:13:40.997092   10752 cli_runner.go:211] docker container inspect docker-flags-119000 --format={{.State.Status}} returned with exit code 1
	I0721 18:13:40.997153   10752 delete.go:82] Unable to get host status for docker-flags-119000, assuming it has already been deleted: state: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	W0721 18:13:40.997245   10752 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0721 18:13:40.997255   10752 start.go:729] Will try again in 5 seconds ...
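
This first attempt died by timeout rather than an explicit docker error: createHost's budget is the StartHostTimeout:6m0s from the cluster config (the "360.000000 seconds" above), and the log jumps from the preload-extraction "docker run" at 18:07:38 straight to 18:13:37, so the whole window appears to have been spent extracting the preloaded-images tarball into the volume. The kic container itself was never created, which is why every "docker container inspect" fails with "No such container". The "will retry after ..." lines come from retry.go:31; a minimal sketch of that shape, assuming jittered exponential backoff (minikube's exact policy may differ):

	// retry_sketch.go: illustrative only.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryExpo re-runs fn with a randomized, doubling delay until it
	// succeeds or maxTime elapses, mimicking the log's growing intervals.
	func retryExpo(fn func() error, base, maxTime time.Duration) error {
		deadline := time.Now().Add(maxTime)
		delay := base
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return err
			}
			jittered := time.Duration(float64(delay) * (0.5 + rand.Float64()))
			fmt.Printf("will retry after %v: %v\n", jittered, err)
			time.Sleep(jittered)
			delay *= 2
		}
	}

	func main() {
		_ = retryExpo(func() error { return errors.New("No such container: docker-flags-119000") },
			300*time.Millisecond, 3*time.Second)
	}
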
	I0721 18:13:45.998608   10752 start.go:360] acquireMachinesLock for docker-flags-119000: {Name:mkcaf72638d244b9bbb117caeb9d8c6caad09330 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 18:13:45.999535   10752 start.go:364] duration metric: took 812.63µs to acquireMachinesLock for "docker-flags-119000"
	I0721 18:13:45.999617   10752 start.go:96] Skipping create...Using existing machine configuration
	I0721 18:13:45.999637   10752 fix.go:54] fixHost starting: 
	I0721 18:13:46.000209   10752 cli_runner.go:164] Run: docker container inspect docker-flags-119000 --format={{.State.Status}}
	W0721 18:13:46.019956   10752 cli_runner.go:211] docker container inspect docker-flags-119000 --format={{.State.Status}} returned with exit code 1
	I0721 18:13:46.020004   10752 fix.go:112] recreateIfNeeded on docker-flags-119000: state= err=unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:46.020023   10752 fix.go:117] machineExists: false. err=machine does not exist
	I0721 18:13:46.041958   10752 out.go:177] * docker "docker-flags-119000" container is missing, will recreate.
	I0721 18:13:46.063661   10752 delete.go:124] DEMOLISHING docker-flags-119000 ...
	I0721 18:13:46.063834   10752 cli_runner.go:164] Run: docker container inspect docker-flags-119000 --format={{.State.Status}}
	W0721 18:13:46.081943   10752 cli_runner.go:211] docker container inspect docker-flags-119000 --format={{.State.Status}} returned with exit code 1
	W0721 18:13:46.082006   10752 stop.go:83] unable to get state: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:46.082030   10752 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:46.082409   10752 cli_runner.go:164] Run: docker container inspect docker-flags-119000 --format={{.State.Status}}
	W0721 18:13:46.099536   10752 cli_runner.go:211] docker container inspect docker-flags-119000 --format={{.State.Status}} returned with exit code 1
	I0721 18:13:46.099589   10752 delete.go:82] Unable to get host status for docker-flags-119000, assuming it has already been deleted: state: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:46.099678   10752 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-119000
	W0721 18:13:46.116617   10752 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-119000 returned with exit code 1
	I0721 18:13:46.116652   10752 kic.go:371] could not find the container docker-flags-119000 to remove it. will try anyways
	I0721 18:13:46.116731   10752 cli_runner.go:164] Run: docker container inspect docker-flags-119000 --format={{.State.Status}}
	W0721 18:13:46.134008   10752 cli_runner.go:211] docker container inspect docker-flags-119000 --format={{.State.Status}} returned with exit code 1
	W0721 18:13:46.134055   10752 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:46.134149   10752 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-119000 /bin/bash -c "sudo init 0"
	W0721 18:13:46.151344   10752 cli_runner.go:211] docker exec --privileged -t docker-flags-119000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0721 18:13:46.151384   10752 oci.go:650] error shutdown docker-flags-119000: docker exec --privileged -t docker-flags-119000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:47.151921   10752 cli_runner.go:164] Run: docker container inspect docker-flags-119000 --format={{.State.Status}}
	W0721 18:13:47.171022   10752 cli_runner.go:211] docker container inspect docker-flags-119000 --format={{.State.Status}} returned with exit code 1
	I0721 18:13:47.171088   10752 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:47.171098   10752 oci.go:664] temporary error: container docker-flags-119000 status is  but expect it to be exited
	I0721 18:13:47.171121   10752 retry.go:31] will retry after 682.661335ms: couldn't verify container is exited. %v: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:47.854165   10752 cli_runner.go:164] Run: docker container inspect docker-flags-119000 --format={{.State.Status}}
	W0721 18:13:47.873574   10752 cli_runner.go:211] docker container inspect docker-flags-119000 --format={{.State.Status}} returned with exit code 1
	I0721 18:13:47.873623   10752 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:47.873636   10752 oci.go:664] temporary error: container docker-flags-119000 status is  but expect it to be exited
	I0721 18:13:47.873663   10752 retry.go:31] will retry after 812.379126ms: couldn't verify container is exited. %v: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:48.688446   10752 cli_runner.go:164] Run: docker container inspect docker-flags-119000 --format={{.State.Status}}
	W0721 18:13:48.708426   10752 cli_runner.go:211] docker container inspect docker-flags-119000 --format={{.State.Status}} returned with exit code 1
	I0721 18:13:48.708472   10752 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:48.708487   10752 oci.go:664] temporary error: container docker-flags-119000 status is  but expect it to be exited
	I0721 18:13:48.708513   10752 retry.go:31] will retry after 1.655776014s: couldn't verify container is exited. %v: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:50.364607   10752 cli_runner.go:164] Run: docker container inspect docker-flags-119000 --format={{.State.Status}}
	W0721 18:13:50.382956   10752 cli_runner.go:211] docker container inspect docker-flags-119000 --format={{.State.Status}} returned with exit code 1
	I0721 18:13:50.383004   10752 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:50.383014   10752 oci.go:664] temporary error: container docker-flags-119000 status is  but expect it to be exited
	I0721 18:13:50.383037   10752 retry.go:31] will retry after 2.291632783s: couldn't verify container is exited. %v: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:52.675014   10752 cli_runner.go:164] Run: docker container inspect docker-flags-119000 --format={{.State.Status}}
	W0721 18:13:52.695618   10752 cli_runner.go:211] docker container inspect docker-flags-119000 --format={{.State.Status}} returned with exit code 1
	I0721 18:13:52.695665   10752 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:52.695674   10752 oci.go:664] temporary error: container docker-flags-119000 status is  but expect it to be exited
	I0721 18:13:52.695698   10752 retry.go:31] will retry after 3.45887037s: couldn't verify container is exited. %v: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:56.155038   10752 cli_runner.go:164] Run: docker container inspect docker-flags-119000 --format={{.State.Status}}
	W0721 18:13:56.172725   10752 cli_runner.go:211] docker container inspect docker-flags-119000 --format={{.State.Status}} returned with exit code 1
	I0721 18:13:56.172774   10752 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:13:56.172784   10752 oci.go:664] temporary error: container docker-flags-119000 status is  but expect it to be exited
	I0721 18:13:56.172815   10752 retry.go:31] will retry after 4.83041921s: couldn't verify container is exited. %v: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:14:01.004941   10752 cli_runner.go:164] Run: docker container inspect docker-flags-119000 --format={{.State.Status}}
	W0721 18:14:01.024653   10752 cli_runner.go:211] docker container inspect docker-flags-119000 --format={{.State.Status}} returned with exit code 1
	I0721 18:14:01.024703   10752 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:14:01.024714   10752 oci.go:664] temporary error: container docker-flags-119000 status is  but expect it to be exited
	I0721 18:14:01.024736   10752 retry.go:31] will retry after 3.732853976s: couldn't verify container is exited. %v: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:14:04.757859   10752 cli_runner.go:164] Run: docker container inspect docker-flags-119000 --format={{.State.Status}}
	W0721 18:14:04.776952   10752 cli_runner.go:211] docker container inspect docker-flags-119000 --format={{.State.Status}} returned with exit code 1
	I0721 18:14:04.776998   10752 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:14:04.777008   10752 oci.go:664] temporary error: container docker-flags-119000 status is  but expect it to be exited
	I0721 18:14:04.777041   10752 oci.go:88] couldn't shut down docker-flags-119000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	 
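
The retry.go lines above show the backoff loop minikube runs while waiting for the container to report an exited state: the delay grows roughly exponentially (0.68s, 0.81s, 1.66s, 2.29s, 3.46s, 4.83s, ...) with jitter, until the verification deadline passes and oci.go gives up "(might be okay)". A minimal runnable sketch of that pattern follows; it is an approximation, not minikube's actual retry implementation, and all names in it are illustrative.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo calls f until it succeeds or the overall deadline passes,
// sleeping a jittered, exponentially growing interval between attempts.
func retryExpo(f func() error, initial, deadline time.Duration) error {
	start := time.Now()
	wait := initial
	for {
		err := f()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out after %s: %w", deadline, err)
		}
		// Jitter between 0.5x and 1.5x of the base delay so concurrent
		// callers do not retry in lockstep.
		jittered := wait/2 + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		wait *= 2
	}
}

func main() {
	_ = retryExpo(func() error {
		return errors.New("container status is \"\" but expect it to be exited")
	}, 500*time.Millisecond, 5*time.Second)
}
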
	I0721 18:14:04.777120   10752 cli_runner.go:164] Run: docker rm -f -v docker-flags-119000
	I0721 18:14:04.794803   10752 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-119000
	W0721 18:14:04.812291   10752 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-119000 returned with exit code 1
	I0721 18:14:04.812414   10752 cli_runner.go:164] Run: docker network inspect docker-flags-119000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0721 18:14:04.829966   10752 cli_runner.go:164] Run: docker network rm docker-flags-119000
	I0721 18:14:04.908357   10752 fix.go:124] Sleeping 1 second for extra luck!
	I0721 18:14:05.910621   10752 start.go:125] createHost starting for "" (driver="docker")
	I0721 18:14:05.933788   10752 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0721 18:14:05.934003   10752 start.go:159] libmachine.API.Create for "docker-flags-119000" (driver="docker")
	I0721 18:14:05.934030   10752 client.go:168] LocalClient.Create starting
	I0721 18:14:05.934247   10752 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/ca.pem
	I0721 18:14:05.934361   10752 main.go:141] libmachine: Decoding PEM data...
	I0721 18:14:05.934390   10752 main.go:141] libmachine: Parsing certificate...
	I0721 18:14:05.934466   10752 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/cert.pem
	I0721 18:14:05.934546   10752 main.go:141] libmachine: Decoding PEM data...
	I0721 18:14:05.934561   10752 main.go:141] libmachine: Parsing certificate...
	I0721 18:14:05.935920   10752 cli_runner.go:164] Run: docker network inspect docker-flags-119000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0721 18:14:05.954677   10752 cli_runner.go:211] docker network inspect docker-flags-119000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0721 18:14:05.954784   10752 network_create.go:284] running [docker network inspect docker-flags-119000] to gather additional debugging logs...
	I0721 18:14:05.954806   10752 cli_runner.go:164] Run: docker network inspect docker-flags-119000
	W0721 18:14:05.972408   10752 cli_runner.go:211] docker network inspect docker-flags-119000 returned with exit code 1
	I0721 18:14:05.972446   10752 network_create.go:287] error running [docker network inspect docker-flags-119000]: docker network inspect docker-flags-119000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-119000 not found
	I0721 18:14:05.972461   10752 network_create.go:289] output of [docker network inspect docker-flags-119000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-119000 not found
	
	** /stderr **
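
The --format argument in the docker network inspect calls above is a single-line Go template that flattens the network's name, driver, IPAM config, MTU, and attached container IPs into one JSON object. Reflowed here for readability only (same template as in the log, with whitespace added):

{
  "Name": "{{.Name}}",
  "Driver": "{{.Driver}}",
  "Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}",
  "Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}",
  "MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},
  "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]
}
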
	I0721 18:14:05.972604   10752 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0721 18:14:05.992012   10752 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:14:05.993651   10752 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:14:05.995229   10752 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:14:05.997070   10752 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:14:05.998713   10752 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:14:06.000543   10752 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:14:06.001245   10752 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014dee60}
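
The network.go lines above scan candidate private /24 subnets starting at 192.168.49.0/24, advancing the third octet by 9 (49, 58, 67, 76, 85, 94, 103) until an unreserved one is found. A minimal sketch of that scan, assuming only the step-by-9 pattern visible in the log (minikube's real implementation also probes host interfaces and tracks reservations):

package main

import "fmt"

// firstFreeSubnet returns the first candidate 192.168.x.0/24 subnet,
// stepping x by 9 from 49, that is not in the reserved set.
func firstFreeSubnet(reserved map[string]bool) string {
	for octet := 49; octet <= 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !reserved[subnet] {
			return subnet
		}
	}
	return ""
}

func main() {
	reserved := map[string]bool{}
	for _, s := range []string{"192.168.49.0/24", "192.168.58.0/24",
		"192.168.67.0/24", "192.168.76.0/24", "192.168.85.0/24",
		"192.168.94.0/24"} {
		reserved[s] = true
	}
	fmt.Println(firstFreeSubnet(reserved)) // 192.168.103.0/24, matching the log
}
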
	I0721 18:14:06.001274   10752 network_create.go:124] attempt to create docker network docker-flags-119000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0721 18:14:06.001388   10752 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-119000 docker-flags-119000
	I0721 18:14:06.065579   10752 network_create.go:108] docker network docker-flags-119000 192.168.103.0/24 created
	I0721 18:14:06.065616   10752 kic.go:121] calculated static IP "192.168.103.2" for the "docker-flags-119000" container
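
The static IP in the kic.go line above is just the first client address of the new subnet: .0 is the network address, .1 the gateway, .2 the first container. A sketch under that assumption, for an IPv4 /24 (names illustrative):

package main

import (
	"fmt"
	"net"
)

// staticIPFor returns the first container address in a /24:
// network .0 -> gateway .1 -> first container .2.
func staticIPFor(cidr string) (string, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return "", err
	}
	ip := ipnet.IP.To4()
	ip[3] += 2
	return ip.String(), nil
}

func main() {
	ip, _ := staticIPFor("192.168.103.0/24")
	fmt.Println(ip) // 192.168.103.2, matching the kic.go line above
}
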
	I0721 18:14:06.065726   10752 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0721 18:14:06.085737   10752 cli_runner.go:164] Run: docker volume create docker-flags-119000 --label name.minikube.sigs.k8s.io=docker-flags-119000 --label created_by.minikube.sigs.k8s.io=true
	I0721 18:14:06.102821   10752 oci.go:103] Successfully created a docker volume docker-flags-119000
	I0721 18:14:06.102942   10752 cli_runner.go:164] Run: docker run --rm --name docker-flags-119000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-119000 --entrypoint /usr/bin/test -v docker-flags-119000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0721 18:14:06.382588   10752 oci.go:107] Successfully prepared a docker volume docker-flags-119000
	I0721 18:14:06.382620   10752 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 18:14:06.382633   10752 kic.go:194] Starting extracting preloaded images to volume ...
	I0721 18:14:06.382742   10752 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-119000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
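
This docker run invocation mounts the lz4-compressed preload tarball read-only and untars it into the volume that will back the node's /var. Note the gap to the next timestamp (18:14:06 to 18:20:05): the extraction consumed essentially the whole 6-minute createHost budget, which is what trips the DRV_CREATE_TIMEOUT further down. A sketch of the same invocation with the long paths replaced by parameters (names illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload mirrors the docker run line above: mount the compressed
// preload read-only, untar it into the volume that backs the node's /var.
func extractPreload(tarball, volume, kicImage string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		kicImage,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical arguments; the real values appear in the log line above.
	fmt.Println(extractPreload(
		"/path/to/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4",
		"docker-flags-119000",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298"))
}
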
	I0721 18:20:05.933282   10752 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 18:20:05.933410   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:20:05.953394   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	I0721 18:20:05.953506   10752 retry.go:31] will retry after 156.330781ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
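
Each of the df probes above first needs an SSH session into the node, and the SSH port comes from asking docker which host port is published for the container's 22/tcp. With the container gone, the inspect exits 1 and the probe retries until it gives up. A sketch of the port lookup, reusing the exact template from the log (function names are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks docker which host port is published for the container's
// 22/tcp, using the same --format template as the cli_runner lines above.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		// A missing container makes docker exit 1, exactly as logged above.
		return "", fmt.Errorf("get port 22 for %q: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	fmt.Println(sshHostPort("docker-flags-119000"))
}
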
	I0721 18:20:06.110811   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:20:06.131662   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	I0721 18:20:06.131757   10752 retry.go:31] will retry after 218.865688ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:20:06.352237   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:20:06.371336   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	I0721 18:20:06.371438   10752 retry.go:31] will retry after 564.805417ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:20:06.936757   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:20:06.999061   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	W0721 18:20:06.999162   10752 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	
	W0721 18:20:06.999185   10752 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:20:06.999240   10752 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0721 18:20:06.999293   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:20:07.016517   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	I0721 18:20:07.016608   10752 retry.go:31] will retry after 184.141559ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:20:07.203161   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:20:07.223170   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	I0721 18:20:07.223277   10752 retry.go:31] will retry after 433.997968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:20:07.659626   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:20:07.679621   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	I0721 18:20:07.679729   10752 retry.go:31] will retry after 398.914986ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:20:08.080552   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:20:08.099967   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	W0721 18:20:08.100071   10752 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	
	W0721 18:20:08.100094   10752 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:20:08.100105   10752 start.go:128] duration metric: took 6m2.192576572s to createHost
	I0721 18:20:08.100190   10752 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 18:20:08.100245   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:20:08.118198   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	I0721 18:20:08.118288   10752 retry.go:31] will retry after 218.795849ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:20:08.338252   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:20:08.358381   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	I0721 18:20:08.358489   10752 retry.go:31] will retry after 304.341239ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:20:08.664420   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:20:08.683942   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	I0721 18:20:08.684032   10752 retry.go:31] will retry after 467.882492ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:20:09.153513   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:20:09.172933   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	I0721 18:20:09.173024   10752 retry.go:31] will retry after 859.399687ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:20:10.032849   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:20:10.052892   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	W0721 18:20:10.052995   10752 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	
	W0721 18:20:10.053015   10752 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:20:10.053074   10752 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0721 18:20:10.053128   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:20:10.071016   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	I0721 18:20:10.071106   10752 retry.go:31] will retry after 215.996088ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:20:10.288496   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:20:10.308470   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	I0721 18:20:10.308565   10752 retry.go:31] will retry after 373.227782ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:20:10.683133   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:20:10.702922   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	I0721 18:20:10.703016   10752 retry.go:31] will retry after 355.568805ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:20:11.060332   10752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000
	W0721 18:20:11.079893   10752 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000 returned with exit code 1
	W0721 18:20:11.079996   10752 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	
	W0721 18:20:11.080019   10752 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-119000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-119000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	I0721 18:20:11.080033   10752 fix.go:56] duration metric: took 6m25.08371309s for fixHost
	I0721 18:20:11.080039   10752 start.go:83] releasing machines lock for "docker-flags-119000", held for 6m25.083772227s
	W0721 18:20:11.080115   10752 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-119000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p docker-flags-119000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0721 18:20:11.122675   10752 out.go:177] 
	W0721 18:20:11.144570   10752 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0721 18:20:11.144622   10752 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0721 18:20:11.144646   10752 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0721 18:20:11.166419   10752 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-119000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-119000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-119000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (160.817068ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-119000 host status: state: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	

** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-119000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-119000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-119000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (161.13259ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-119000 host status: state: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000
	

** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-119000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-119000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-21 18:20:11.585474 -0700 PDT m=+6970.326286091
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-119000
helpers_test.go:235: (dbg) docker inspect docker-flags-119000:

-- stdout --
	[
	    {
	        "Name": "docker-flags-119000",
	        "Id": "4b5b7182cacc7263b1d14e2fe822e60f8aa6eada167bbc8371ab9b144b0cfe66",
	        "Created": "2024-07-22T01:14:06.017931688Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "docker-flags-119000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-119000 -n docker-flags-119000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-119000 -n docker-flags-119000: exit status 7 (73.147677ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0721 18:20:11.678335   11045 status.go:249] status error: host: state: unknown state "docker-flags-119000": docker container inspect docker-flags-119000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-119000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-119000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-119000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-119000
--- FAIL: TestDockerFlags (755.93s)

TestForceSystemdFlag (754.54s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-246000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-246000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 52 (12m33.844220227s)

-- stdout --
	* [force-systemd-flag-246000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-flag-246000" primary control-plane node in "force-systemd-flag-246000" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-246000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0721 18:06:59.601156   10675 out.go:291] Setting OutFile to fd 1 ...
	I0721 18:06:59.601339   10675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 18:06:59.601344   10675 out.go:304] Setting ErrFile to fd 2...
	I0721 18:06:59.601348   10675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 18:06:59.601527   10675 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 18:06:59.602916   10675 out.go:298] Setting JSON to false
	I0721 18:06:59.625359   10675 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":7588,"bootTime":1721602831,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0721 18:06:59.625476   10675 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 18:06:59.647490   10675 out.go:177] * [force-systemd-flag-246000] minikube v1.33.1 on Darwin 14.5
	I0721 18:06:59.690062   10675 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 18:06:59.690129   10675 notify.go:220] Checking for updates...
	I0721 18:06:59.733024   10675 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	I0721 18:06:59.756049   10675 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0721 18:06:59.777177   10675 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 18:06:59.798243   10675 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	I0721 18:06:59.819016   10675 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 18:06:59.841084   10675 config.go:182] Loaded profile config "force-systemd-env-268000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 18:06:59.841263   10675 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 18:06:59.865442   10675 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0721 18:06:59.865620   10675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0721 18:06:59.947927   10675 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:111 OomKillDisable:false NGoroutines:218 SystemTime:2024-07-22 01:06:59.938951832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
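
The info.go line above is the decoded result of `docker system info --format "{{json .}}"`, which minikube uses to validate the docker driver. A minimal sketch of such a probe, decoding only a handful of the fields visible in the log (the struct here is a small illustrative subset):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo holds a few of the JSON fields emitted by `docker system info`.
type dockerInfo struct {
	ServerVersion   string `json:"ServerVersion"`
	OperatingSystem string `json:"OperatingSystem"`
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
}

func probeDocker() (*dockerInfo, error) {
	out, err := exec.Command("docker", "system", "info",
		"--format", "{{json .}}").Output()
	if err != nil {
		return nil, err
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		return nil, err
	}
	return &info, nil
}

func main() {
	info, err := probeDocker()
	if err != nil {
		fmt.Println("docker unavailable:", err)
		return
	}
	fmt.Printf("%s on %s: %d CPUs, %d bytes RAM\n",
		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
}
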
	I0721 18:06:59.990287   10675 out.go:177] * Using the docker driver based on user configuration
	I0721 18:07:00.011573   10675 start.go:297] selected driver: docker
	I0721 18:07:00.011603   10675 start.go:901] validating driver "docker" against <nil>
	I0721 18:07:00.011618   10675 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 18:07:00.016190   10675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0721 18:07:00.094878   10675 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:111 OomKillDisable:false NGoroutines:218 SystemTime:2024-07-22 01:07:00.086320304 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0721 18:07:00.095090   10675 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 18:07:00.095313   10675 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0721 18:07:00.117070   10675 out.go:177] * Using Docker Desktop driver with root privileges
	I0721 18:07:00.138616   10675 cni.go:84] Creating CNI manager for ""
	I0721 18:07:00.138660   10675 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 18:07:00.138674   10675 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
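
The cni.go decision above is version-gated: with the docker driver and the docker container runtime on Kubernetes v1.24 or newer (where dockershim no longer exists), minikube recommends the bridge CNI. A sketch of that gate, using golang.org/x/mod/semver for the comparison; this is illustrative, not minikube's exact code:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// chooseCNI returns "bridge" for docker-on-docker clusters at k8s v1.24+,
// mirroring the recommendation logged by cni.go above.
func chooseCNI(driver, runtime, k8sVersion string) string {
	if driver == "docker" && runtime == "docker" &&
		semver.Compare(k8sVersion, "v1.24.0") >= 0 {
		return "bridge"
	}
	return "" // keep whatever the runtime provides
}

func main() {
	fmt.Println(chooseCNI("docker", "docker", "v1.30.3")) // bridge
}
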
	I0721 18:07:00.138764   10675 start.go:340] cluster config:
	{Name:force-systemd-flag-246000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 18:07:00.160932   10675 out.go:177] * Starting "force-systemd-flag-246000" primary control-plane node in "force-systemd-flag-246000" cluster
	I0721 18:07:00.202663   10675 cache.go:121] Beginning downloading kic base image for docker with docker
	I0721 18:07:00.223857   10675 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0721 18:07:00.265754   10675 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 18:07:00.265806   10675 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0721 18:07:00.265845   10675 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0721 18:07:00.265870   10675 cache.go:56] Caching tarball of preloaded images
	I0721 18:07:00.266110   10675 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0721 18:07:00.266130   10675 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 18:07:00.266947   10675 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/force-systemd-flag-246000/config.json ...
	I0721 18:07:00.267152   10675 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/force-systemd-flag-246000/config.json: {Name:mk68906409cc5f7b25e28f22e7aa55078c3ee381 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
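	[Editor's note] The lock.go line above shows the config write being serialized behind a named lock (500ms retry delay, 1m0s timeout) so that concurrent minikube invocations cannot interleave writes to the same profile's config.json. A sketch of the pattern follows; minikube needs a cross-process lock, so the in-process mutex used here only illustrates the shape, and WriteFileLocked is a hypothetical name:

    // sketch: write-behind-a-named-lock, as suggested by the lock.go line above.
    // A real cross-process lock would use a file lock; this in-process
    // version only illustrates the shape of the API.
    package main

    import (
        "os"
        "sync"
    )

    var (
        mu    sync.Mutex
        locks = map[string]*sync.Mutex{}
    )

    func lockFor(path string) *sync.Mutex {
        mu.Lock()
        defer mu.Unlock()
        if l, ok := locks[path]; ok {
            return l
        }
        l := &sync.Mutex{}
        locks[path] = l
        return l
    }

    // WriteFileLocked serializes writers to the same path.
    func WriteFileLocked(path string, data []byte, perm os.FileMode) error {
        l := lockFor(path)
        l.Lock()
        defer l.Unlock()
        return os.WriteFile(path, data, perm)
    }

    func main() {
        _ = WriteFileLocked("/tmp/config.json",
            []byte(`{"Name":"force-systemd-flag-246000"}`), 0o644)
    }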
	W0721 18:07:00.292080   10675 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f is of wrong architecture
	I0721 18:07:00.292093   10675 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0721 18:07:00.292206   10675 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0721 18:07:00.292229   10675 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0721 18:07:00.292235   10675 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0721 18:07:00.292246   10675 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0721 18:07:00.292251   10675 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0721 18:07:00.295405   10675 image.go:273] response: 
	I0721 18:07:00.949833   10675 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0721 18:07:00.949883   10675 cache.go:194] Successfully downloaded all kic artifacts
	I0721 18:07:00.949931   10675 start.go:360] acquireMachinesLock for force-systemd-flag-246000: {Name:mk75864c8caf63b8caef2a61d4aef05092f8625e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 18:07:00.950102   10675 start.go:364] duration metric: took 158.444µs to acquireMachinesLock for "force-systemd-flag-246000"
	I0721 18:07:00.950144   10675 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-246000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-246000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 18:07:00.950201   10675 start.go:125] createHost starting for "" (driver="docker")
	I0721 18:07:00.992657   10675 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0721 18:07:00.992843   10675 start.go:159] libmachine.API.Create for "force-systemd-flag-246000" (driver="docker")
	I0721 18:07:00.992872   10675 client.go:168] LocalClient.Create starting
	I0721 18:07:00.992980   10675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/ca.pem
	I0721 18:07:00.993033   10675 main.go:141] libmachine: Decoding PEM data...
	I0721 18:07:00.993052   10675 main.go:141] libmachine: Parsing certificate...
	I0721 18:07:00.993112   10675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/cert.pem
	I0721 18:07:00.993150   10675 main.go:141] libmachine: Decoding PEM data...
	I0721 18:07:00.993157   10675 main.go:141] libmachine: Parsing certificate...
	I0721 18:07:00.993670   10675 cli_runner.go:164] Run: docker network inspect force-systemd-flag-246000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0721 18:07:01.011293   10675 cli_runner.go:211] docker network inspect force-systemd-flag-246000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0721 18:07:01.011392   10675 network_create.go:284] running [docker network inspect force-systemd-flag-246000] to gather additional debugging logs...
	I0721 18:07:01.011409   10675 cli_runner.go:164] Run: docker network inspect force-systemd-flag-246000
	W0721 18:07:01.029120   10675 cli_runner.go:211] docker network inspect force-systemd-flag-246000 returned with exit code 1
	I0721 18:07:01.029154   10675 network_create.go:287] error running [docker network inspect force-systemd-flag-246000]: docker network inspect force-systemd-flag-246000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-246000 not found
	I0721 18:07:01.029169   10675 network_create.go:289] output of [docker network inspect force-systemd-flag-246000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-246000 not found
	
	** /stderr **
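	[Editor's note] The exit-code-1 inspect above is the expected probe result, not a fault: cli_runner shells out to docker network inspect with a Go template that flattens the network's Name, Driver, Subnet, Gateway, MTU and container IPs into one line, and "network ... not found" simply tells minikube the per-cluster network does not exist yet, so it moves on to creating one. A sketch of the same shell-out (abridged template, hypothetical function name):

    // sketch: probing for the cluster network the way cli_runner does above.
    // Exit status 1 with "not found" on stderr means "create it".
    package main

    import (
        "fmt"
        "os/exec"
    )

    const tmpl = `{"Name": "{{.Name}}","Driver": "{{.Driver}}",` +
        `"Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}",` +
        `"Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}"}`

    func inspectNetwork(name string) (string, error) {
        out, err := exec.Command("docker", "network", "inspect", name,
            "--format", tmpl).CombinedOutput()
        return string(out), err // non-nil err here usually means "no such network"
    }

    func main() {
        fmt.Println(inspectNetwork("force-systemd-flag-246000"))
    }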
	I0721 18:07:01.029292   10675 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0721 18:07:01.048573   10675 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:07:01.049955   10675 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:07:01.050314   10675 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001616d80}
	I0721 18:07:01.050331   10675 network_create.go:124] attempt to create docker network force-systemd-flag-246000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0721 18:07:01.050408   10675 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-246000 force-systemd-flag-246000
	W0721 18:07:01.068432   10675 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-246000 force-systemd-flag-246000 returned with exit code 1
	W0721 18:07:01.068469   10675 network_create.go:149] failed to create docker network force-systemd-flag-246000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-246000 force-systemd-flag-246000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0721 18:07:01.068491   10675 network_create.go:116] failed to create docker network force-systemd-flag-246000 192.168.67.0/24, will retry: subnet is taken
	I0721 18:07:01.069886   10675 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:07:01.070284   10675 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00147bf00}
	I0721 18:07:01.070296   10675 network_create.go:124] attempt to create docker network force-systemd-flag-246000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0721 18:07:01.070369   10675 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-246000 force-systemd-flag-246000
	I0721 18:07:01.134158   10675 network_create.go:108] docker network force-systemd-flag-246000 192.168.76.0/24 created
	I0721 18:07:01.134198   10675 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-246000" container
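	[Editor's note] The network.go/network_create.go exchange above is minikube's subnet hunt: candidate /24 ranges step up from 192.168.49.0 by 9 in the third octet (49, 58, 67, 76, ...), locally reserved ones are skipped, and when docker network create still fails with the daemon's "Pool overlaps with other one on this address space" the subnet is marked taken and the next candidate is tried. The "calculated static IP" is then just the first client address in the winning range (gateway .1, node .2). A simplified sketch of that loop; matching on the daemon's error string is an assumption made for illustration:

    // sketch: probe 192.168.X.0/24 candidates, stepping X by 9 as in the log
    // (49, 58, 67, 76, ...), retrying on Docker's pool-overlap error.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func createNetwork(name string) (string, error) {
        for octet := 49; octet <= 247; octet += 9 { // bound chosen for the sketch
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            gateway := fmt.Sprintf("192.168.%d.1", octet)
            out, err := exec.Command("docker", "network", "create",
                "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
                name).CombinedOutput()
            if err == nil {
                return subnet, nil
            }
            // Heuristic: the daemon reports an in-use range with this message.
            if strings.Contains(string(out), "Pool overlaps") {
                continue // subnet is taken, try the next candidate
            }
            return "", fmt.Errorf("network create failed: %v: %s", err, out)
        }
        return "", fmt.Errorf("no free /24 found for %s", name)
    }

    func main() {
        fmt.Println(createNetwork("force-systemd-flag-246000"))
    }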
	I0721 18:07:01.134341   10675 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0721 18:07:01.154228   10675 cli_runner.go:164] Run: docker volume create force-systemd-flag-246000 --label name.minikube.sigs.k8s.io=force-systemd-flag-246000 --label created_by.minikube.sigs.k8s.io=true
	I0721 18:07:01.172504   10675 oci.go:103] Successfully created a docker volume force-systemd-flag-246000
	I0721 18:07:01.172618   10675 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-246000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-246000 --entrypoint /usr/bin/test -v force-systemd-flag-246000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0721 18:07:01.586820   10675 oci.go:107] Successfully prepared a docker volume force-systemd-flag-246000
	I0721 18:07:01.586875   10675 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 18:07:01.586889   10675 kic.go:194] Starting extracting preloaded images to volume ...
	I0721 18:07:01.586985   10675 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-246000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
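	[Editor's note] The two docker run --rm commands above are the kic preload dance: the first starts a throwaway sidecar whose entrypoint is /usr/bin/test -d /var/lib, run purely to materialize the named volume and prove it mounts at /var; the second mounts the lz4 preload tarball read-only and unpacks it into that volume with tar -I lz4 -xf. Note the timestamp of the next log line, almost six minutes later, which is consistent with this extraction stalling until the 360-second createHost deadline. A sketch of running the extraction under an explicit deadline (the timeout placement is an assumption; in minikube the surrounding createHost timer does the enforcing):

    // sketch: the preload extraction above, run with a hard deadline.
    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func extractPreload(tarball, volume, image string) error {
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        cmd := exec.CommandContext(ctx, "docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        out, err := cmd.CombinedOutput()
        if ctx.Err() == context.DeadlineExceeded {
            return fmt.Errorf("preload extraction timed out: %s", out)
        }
        return err
    }

    func main() {
        // Hypothetical tarball path, standing in for the one in the log above.
        err := extractPreload("/tmp/preloaded-images.tar.lz4",
            "force-systemd-flag-246000",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298")
        fmt.Println(err)
    }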
	I0721 18:13:00.990146   10675 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 18:13:00.990291   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:13:01.010510   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	I0721 18:13:01.010641   10675 retry.go:31] will retry after 311.656831ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:01.324684   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:13:01.344974   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	I0721 18:13:01.345105   10675 retry.go:31] will retry after 261.343142ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:01.608950   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:13:01.628239   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	I0721 18:13:01.628334   10675 retry.go:31] will retry after 682.988975ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:02.313595   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:13:02.334031   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	W0721 18:13:02.334153   10675 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	
	W0721 18:13:02.334178   10675 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:02.334240   10675 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0721 18:13:02.334304   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:13:02.352003   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	I0721 18:13:02.352098   10675 retry.go:31] will retry after 140.758315ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:02.495079   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:13:02.515053   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	I0721 18:13:02.515152   10675 retry.go:31] will retry after 535.926223ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:03.053488   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:13:03.072226   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	I0721 18:13:03.072315   10675 retry.go:31] will retry after 375.842125ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:03.450603   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:13:03.471134   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	W0721 18:13:03.471230   10675 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	
	W0721 18:13:03.471242   10675 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
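	[Editor's note] Everything from 18:13:00 to here is two disk probes that could never succeed: df -h /var (the Use% column, $5) and df -BG /var (GiB available, $4) must run over SSH inside the node, and resolving the SSH endpoint means asking docker container inspect for the host port mapped to 22/tcp. Since the container was never created, every lookup fails with "No such container", and each retry.go attempt re-fails after a short, slightly growing delay. A sketch of that retry shape (hypothetical helper, jitter modeled loosely on the varying delays above):

    // sketch: a retry helper in the spirit of the retry.go lines above:
    // jittered, growing delays until an overall budget is exhausted.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retryUntil(budget, base time.Duration, fn func() error) error {
        deadline := time.Now().Add(budget)
        var err error
        for attempt := 0; time.Now().Before(deadline); attempt++ {
            if err = fn(); err == nil {
                return nil
            }
            // Grow the delay each attempt and add jitter, as the varying
            // "will retry after ..." durations in the log suggest.
            d := base * time.Duration(attempt+1)
            d += time.Duration(rand.Int63n(int64(d)))
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        err := retryUntil(2*time.Second, 100*time.Millisecond, func() error {
            return errors.New("No such container: force-systemd-flag-246000")
        })
        fmt.Println("gave up:", err)
    }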
	I0721 18:13:03.471265   10675 start.go:128] duration metric: took 6m2.524163602s to createHost
	I0721 18:13:03.471273   10675 start.go:83] releasing machines lock for "force-systemd-flag-246000", held for 6m2.524282118s
	W0721 18:13:03.471288   10675 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0721 18:13:03.471718   10675 cli_runner.go:164] Run: docker container inspect force-systemd-flag-246000 --format={{.State.Status}}
	W0721 18:13:03.488941   10675 cli_runner.go:211] docker container inspect force-systemd-flag-246000 --format={{.State.Status}} returned with exit code 1
	I0721 18:13:03.489005   10675 delete.go:82] Unable to get host status for force-systemd-flag-246000, assuming it has already been deleted: state: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	W0721 18:13:03.489105   10675 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0721 18:13:03.489121   10675 start.go:729] Will try again in 5 seconds ...
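	[Editor's note] This is the pivot of the whole failure: the first createHost burned its 360-second budget, so StartHost surfaces the timeout, warns, sleeps five seconds, and runs a second pass, which goes through fixHost below, concludes the machine never existed, demolishes the leftovers, and creates again from scratch. A compressed sketch of that outer retry (function names hypothetical):

    // sketch: the two-pass host start visible here: on failure, warn,
    // wait five seconds, then retry once.
    package main

    import (
        "fmt"
        "time"
    )

    func startWithRetry(start func() error) error {
        if err := start(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second)
            return start() // second pass takes the fixHost path
        }
        return nil
    }

    func main() {
        attempts := 0
        err := startWithRetry(func() error {
            attempts++
            return fmt.Errorf("create host timed out (attempt %d)", attempts)
        })
        fmt.Println(err)
    }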
	I0721 18:13:08.490348   10675 start.go:360] acquireMachinesLock for force-systemd-flag-246000: {Name:mk75864c8caf63b8caef2a61d4aef05092f8625e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 18:13:08.490556   10675 start.go:364] duration metric: took 159.74µs to acquireMachinesLock for "force-systemd-flag-246000"
	I0721 18:13:08.490595   10675 start.go:96] Skipping create...Using existing machine configuration
	I0721 18:13:08.490612   10675 fix.go:54] fixHost starting: 
	I0721 18:13:08.491004   10675 cli_runner.go:164] Run: docker container inspect force-systemd-flag-246000 --format={{.State.Status}}
	W0721 18:13:08.511798   10675 cli_runner.go:211] docker container inspect force-systemd-flag-246000 --format={{.State.Status}} returned with exit code 1
	I0721 18:13:08.511848   10675 fix.go:112] recreateIfNeeded on force-systemd-flag-246000: state= err=unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:08.511871   10675 fix.go:117] machineExists: false. err=machine does not exist
	I0721 18:13:08.533558   10675 out.go:177] * docker "force-systemd-flag-246000" container is missing, will recreate.
	I0721 18:13:08.555358   10675 delete.go:124] DEMOLISHING force-systemd-flag-246000 ...
	I0721 18:13:08.555571   10675 cli_runner.go:164] Run: docker container inspect force-systemd-flag-246000 --format={{.State.Status}}
	W0721 18:13:08.574946   10675 cli_runner.go:211] docker container inspect force-systemd-flag-246000 --format={{.State.Status}} returned with exit code 1
	W0721 18:13:08.575010   10675 stop.go:83] unable to get state: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:08.575029   10675 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:08.575417   10675 cli_runner.go:164] Run: docker container inspect force-systemd-flag-246000 --format={{.State.Status}}
	W0721 18:13:08.592710   10675 cli_runner.go:211] docker container inspect force-systemd-flag-246000 --format={{.State.Status}} returned with exit code 1
	I0721 18:13:08.592766   10675 delete.go:82] Unable to get host status for force-systemd-flag-246000, assuming it has already been deleted: state: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:08.592854   10675 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-246000
	W0721 18:13:08.609851   10675 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-246000 returned with exit code 1
	I0721 18:13:08.609896   10675 kic.go:371] could not find the container force-systemd-flag-246000 to remove it. will try anyways
	I0721 18:13:08.609984   10675 cli_runner.go:164] Run: docker container inspect force-systemd-flag-246000 --format={{.State.Status}}
	W0721 18:13:08.627317   10675 cli_runner.go:211] docker container inspect force-systemd-flag-246000 --format={{.State.Status}} returned with exit code 1
	W0721 18:13:08.627364   10675 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:08.627444   10675 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-246000 /bin/bash -c "sudo init 0"
	W0721 18:13:08.644559   10675 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-246000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0721 18:13:08.644598   10675 oci.go:650] error shutdown force-systemd-flag-246000: docker exec --privileged -t force-systemd-flag-246000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:09.647012   10675 cli_runner.go:164] Run: docker container inspect force-systemd-flag-246000 --format={{.State.Status}}
	W0721 18:13:09.667192   10675 cli_runner.go:211] docker container inspect force-systemd-flag-246000 --format={{.State.Status}} returned with exit code 1
	I0721 18:13:09.667252   10675 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:09.667260   10675 oci.go:664] temporary error: container force-systemd-flag-246000 status is  but expect it to be exited
	I0721 18:13:09.667284   10675 retry.go:31] will retry after 392.562447ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:10.060184   10675 cli_runner.go:164] Run: docker container inspect force-systemd-flag-246000 --format={{.State.Status}}
	W0721 18:13:10.079195   10675 cli_runner.go:211] docker container inspect force-systemd-flag-246000 --format={{.State.Status}} returned with exit code 1
	I0721 18:13:10.079256   10675 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:10.079269   10675 oci.go:664] temporary error: container force-systemd-flag-246000 status is  but expect it to be exited
	I0721 18:13:10.079293   10675 retry.go:31] will retry after 557.215764ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:10.636907   10675 cli_runner.go:164] Run: docker container inspect force-systemd-flag-246000 --format={{.State.Status}}
	W0721 18:13:10.656699   10675 cli_runner.go:211] docker container inspect force-systemd-flag-246000 --format={{.State.Status}} returned with exit code 1
	I0721 18:13:10.656758   10675 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:10.656771   10675 oci.go:664] temporary error: container force-systemd-flag-246000 status is  but expect it to be exited
	I0721 18:13:10.656795   10675 retry.go:31] will retry after 1.671097916s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:12.328454   10675 cli_runner.go:164] Run: docker container inspect force-systemd-flag-246000 --format={{.State.Status}}
	W0721 18:13:12.348490   10675 cli_runner.go:211] docker container inspect force-systemd-flag-246000 --format={{.State.Status}} returned with exit code 1
	I0721 18:13:12.348540   10675 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:12.348553   10675 oci.go:664] temporary error: container force-systemd-flag-246000 status is  but expect it to be exited
	I0721 18:13:12.348581   10675 retry.go:31] will retry after 1.375741414s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:13.726734   10675 cli_runner.go:164] Run: docker container inspect force-systemd-flag-246000 --format={{.State.Status}}
	W0721 18:13:13.747297   10675 cli_runner.go:211] docker container inspect force-systemd-flag-246000 --format={{.State.Status}} returned with exit code 1
	I0721 18:13:13.747345   10675 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:13.747354   10675 oci.go:664] temporary error: container force-systemd-flag-246000 status is  but expect it to be exited
	I0721 18:13:13.747380   10675 retry.go:31] will retry after 1.86954858s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:15.619278   10675 cli_runner.go:164] Run: docker container inspect force-systemd-flag-246000 --format={{.State.Status}}
	W0721 18:13:15.639415   10675 cli_runner.go:211] docker container inspect force-systemd-flag-246000 --format={{.State.Status}} returned with exit code 1
	I0721 18:13:15.639466   10675 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:15.639477   10675 oci.go:664] temporary error: container force-systemd-flag-246000 status is  but expect it to be exited
	I0721 18:13:15.639499   10675 retry.go:31] will retry after 4.355908126s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:19.995871   10675 cli_runner.go:164] Run: docker container inspect force-systemd-flag-246000 --format={{.State.Status}}
	W0721 18:13:20.016671   10675 cli_runner.go:211] docker container inspect force-systemd-flag-246000 --format={{.State.Status}} returned with exit code 1
	I0721 18:13:20.016724   10675 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:20.016734   10675 oci.go:664] temporary error: container force-systemd-flag-246000 status is  but expect it to be exited
	I0721 18:13:20.016757   10675 retry.go:31] will retry after 6.110643056s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:26.128353   10675 cli_runner.go:164] Run: docker container inspect force-systemd-flag-246000 --format={{.State.Status}}
	W0721 18:13:26.148620   10675 cli_runner.go:211] docker container inspect force-systemd-flag-246000 --format={{.State.Status}} returned with exit code 1
	I0721 18:13:26.148677   10675 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:13:26.148690   10675 oci.go:664] temporary error: container force-systemd-flag-246000 status is  but expect it to be exited
	I0721 18:13:26.148730   10675 oci.go:88] couldn't shut down force-systemd-flag-246000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	 
	I0721 18:13:26.148809   10675 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-246000
	I0721 18:13:26.167172   10675 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-246000
	W0721 18:13:26.184124   10675 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-246000 returned with exit code 1
	I0721 18:13:26.184247   10675 cli_runner.go:164] Run: docker network inspect force-systemd-flag-246000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0721 18:13:26.201575   10675 cli_runner.go:164] Run: docker network rm force-systemd-flag-246000
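	[Editor's note] The demolition above follows a fixed ladder: attempt a graceful shutdown inside the node (docker exec ... "sudo init 0"), poll the container state with growing delays hoping to see "exited", and when that cannot be verified (here because there was never a container) force-remove the container and its volumes with docker rm -f -v and delete the per-cluster network. Condensed into a sketch:

    // sketch: the demolish-and-recreate teardown visible above, condensed.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func demolish(name string) {
        // Best-effort graceful shutdown; errors ignored, as in the log.
        exec.Command("docker", "exec", "--privileged", "-t", name,
            "/bin/bash", "-c", "sudo init 0").Run()

        // Poll the state a few times; "exited" means the shutdown landed.
        for i := 0; i < 5; i++ {
            out, err := exec.Command("docker", "container", "inspect", name,
                "--format", "{{.State.Status}}").Output()
            if err == nil && string(out) == "exited\n" {
                break
            }
            time.Sleep(time.Duration(i+1) * 500 * time.Millisecond)
        }

        // Force-remove the container (with volumes) and the cluster network.
        exec.Command("docker", "rm", "-f", "-v", name).Run()
        exec.Command("docker", "network", "rm", name).Run()
        fmt.Println("demolished", name)
    }

    func main() { demolish("force-systemd-flag-246000") }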
	I0721 18:13:26.279907   10675 fix.go:124] Sleeping 1 second for extra luck!
	I0721 18:13:27.282066   10675 start.go:125] createHost starting for "" (driver="docker")
	I0721 18:13:27.304177   10675 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0721 18:13:27.304368   10675 start.go:159] libmachine.API.Create for "force-systemd-flag-246000" (driver="docker")
	I0721 18:13:27.304407   10675 client.go:168] LocalClient.Create starting
	I0721 18:13:27.304634   10675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/ca.pem
	I0721 18:13:27.304744   10675 main.go:141] libmachine: Decoding PEM data...
	I0721 18:13:27.304772   10675 main.go:141] libmachine: Parsing certificate...
	I0721 18:13:27.304861   10675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/cert.pem
	I0721 18:13:27.304942   10675 main.go:141] libmachine: Decoding PEM data...
	I0721 18:13:27.304957   10675 main.go:141] libmachine: Parsing certificate...
	I0721 18:13:27.305670   10675 cli_runner.go:164] Run: docker network inspect force-systemd-flag-246000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0721 18:13:27.324449   10675 cli_runner.go:211] docker network inspect force-systemd-flag-246000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0721 18:13:27.324550   10675 network_create.go:284] running [docker network inspect force-systemd-flag-246000] to gather additional debugging logs...
	I0721 18:13:27.324566   10675 cli_runner.go:164] Run: docker network inspect force-systemd-flag-246000
	W0721 18:13:27.342170   10675 cli_runner.go:211] docker network inspect force-systemd-flag-246000 returned with exit code 1
	I0721 18:13:27.342204   10675 network_create.go:287] error running [docker network inspect force-systemd-flag-246000]: docker network inspect force-systemd-flag-246000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-246000 not found
	I0721 18:13:27.342221   10675 network_create.go:289] output of [docker network inspect force-systemd-flag-246000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-246000 not found
	
	** /stderr **
	I0721 18:13:27.342349   10675 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0721 18:13:27.362538   10675 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:13:27.364104   10675 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:13:27.365640   10675 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:13:27.366974   10675 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:13:27.368536   10675 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:13:27.368875   10675 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015b3760}
	I0721 18:13:27.368889   10675 network_create.go:124] attempt to create docker network force-systemd-flag-246000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0721 18:13:27.368966   10675 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-246000 force-systemd-flag-246000
	I0721 18:13:27.432678   10675 network_create.go:108] docker network force-systemd-flag-246000 192.168.94.0/24 created
	I0721 18:13:27.432721   10675 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-flag-246000" container
	I0721 18:13:27.432834   10675 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0721 18:13:27.452354   10675 cli_runner.go:164] Run: docker volume create force-systemd-flag-246000 --label name.minikube.sigs.k8s.io=force-systemd-flag-246000 --label created_by.minikube.sigs.k8s.io=true
	I0721 18:13:27.469737   10675 oci.go:103] Successfully created a docker volume force-systemd-flag-246000
	I0721 18:13:27.469870   10675 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-246000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-246000 --entrypoint /usr/bin/test -v force-systemd-flag-246000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0721 18:13:27.724718   10675 oci.go:107] Successfully prepared a docker volume force-systemd-flag-246000
	I0721 18:13:27.724783   10675 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 18:13:27.724797   10675 kic.go:194] Starting extracting preloaded images to volume ...
	I0721 18:13:27.724925   10675 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-246000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0721 18:19:27.302799   10675 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 18:19:27.302931   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:19:27.322383   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	I0721 18:19:27.322510   10675 retry.go:31] will retry after 181.556859ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:19:27.506433   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:19:27.526720   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	I0721 18:19:27.526837   10675 retry.go:31] will retry after 194.420192ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:19:27.723690   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:19:27.743845   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	I0721 18:19:27.743947   10675 retry.go:31] will retry after 774.650649ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:19:28.521000   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:19:28.541233   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	W0721 18:19:28.541342   10675 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	
	W0721 18:19:28.541361   10675 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:19:28.541433   10675 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0721 18:19:28.541493   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:19:28.559293   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	I0721 18:19:28.559385   10675 retry.go:31] will retry after 310.756222ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:19:28.872578   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:19:28.893219   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	I0721 18:19:28.893318   10675 retry.go:31] will retry after 292.533563ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:19:29.188282   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:19:29.207957   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	I0721 18:19:29.208074   10675 retry.go:31] will retry after 638.812954ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:19:29.849347   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:19:29.869885   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	W0721 18:19:29.869991   10675 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	
	W0721 18:19:29.870019   10675 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:19:29.870036   10675 start.go:128] duration metric: took 6m2.590946704s to createHost
	I0721 18:19:29.870107   10675 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 18:19:29.870171   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:19:29.887282   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	I0721 18:19:29.887384   10675 retry.go:31] will retry after 137.780996ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:19:30.025390   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:19:30.042359   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	I0721 18:19:30.042451   10675 retry.go:31] will retry after 306.36047ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:19:30.350637   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:19:30.369418   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	I0721 18:19:30.369514   10675 retry.go:31] will retry after 289.760615ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:19:30.660214   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:19:30.679846   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	I0721 18:19:30.679959   10675 retry.go:31] will retry after 1.041719711s: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:19:31.724064   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:19:31.744899   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	W0721 18:19:31.744997   10675 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	
	W0721 18:19:31.745013   10675 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:19:31.745084   10675 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0721 18:19:31.745139   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:19:31.763243   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	I0721 18:19:31.763334   10675 retry.go:31] will retry after 357.66035ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:19:32.122467   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:19:32.143061   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	I0721 18:19:32.143178   10675 retry.go:31] will retry after 299.671321ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:19:32.444511   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:19:32.463083   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	I0721 18:19:32.463182   10675 retry.go:31] will retry after 773.603371ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:19:33.237157   10675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000
	W0721 18:19:33.256949   10675 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000 returned with exit code 1
	W0721 18:19:33.257054   10675 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	
	W0721 18:19:33.257073   10675 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-246000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-246000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	I0721 18:19:33.257088   10675 fix.go:56] duration metric: took 6m24.769789191s for fixHost
	I0721 18:19:33.257096   10675 start.go:83] releasing machines lock for "force-systemd-flag-246000", held for 6m24.769838503s
	W0721 18:19:33.257174   10675 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-246000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-246000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0721 18:19:33.298795   10675 out.go:177] 
	W0721 18:19:33.320484   10675 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0721 18:19:33.320541   10675 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0721 18:19:33.320614   10675 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0721 18:19:33.341562   10675 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-246000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-246000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-246000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (164.369839ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-flag-246000 host status: state: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000
	

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-246000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-21 18:19:33.561203 -0700 PDT m=+6932.301688357
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-246000
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-246000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-246000",
	        "Id": "7ba7f4ed994eb6e8e76d0cbf64de27190f67dd5aa1012ca59302e5b2a12b52bd",
	        "Created": "2024-07-22T01:13:27.384412108Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-flag-246000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-246000 -n force-systemd-flag-246000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-246000 -n force-systemd-flag-246000: exit status 7 (71.522338ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0721 18:19:33.652647   10959 status.go:249] status error: host: state: unknown state "force-systemd-flag-246000": docker container inspect force-systemd-flag-246000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-246000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-246000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-246000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-246000
--- FAIL: TestForceSystemdFlag (754.54s)

TestForceSystemdEnv (756.48s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-268000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E0721 17:57:17.751857    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 17:59:14.695249    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 17:59:54.459915    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 18:02:57.512279    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 18:04:14.782958    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 18:04:54.547962    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-268000 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 52 (12m35.773885551s)

-- stdout --
	* [force-systemd-env-268000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-env-268000" primary control-plane node in "force-systemd-env-268000" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-268000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0721 17:54:59.682422   10191 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:54:59.682678   10191 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:54:59.682684   10191 out.go:304] Setting ErrFile to fd 2...
	I0721 17:54:59.682688   10191 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:54:59.682856   10191 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 17:54:59.684365   10191 out.go:298] Setting JSON to false
	I0721 17:54:59.706746   10191 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6869,"bootTime":1721602830,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0721 17:54:59.706844   10191 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:54:59.729073   10191 out.go:177] * [force-systemd-env-268000] minikube v1.33.1 on Darwin 14.5
	I0721 17:54:59.771246   10191 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:54:59.771290   10191 notify.go:220] Checking for updates...
	I0721 17:54:59.814084   10191 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	I0721 17:54:59.835218   10191 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0721 17:54:59.855998   10191 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:54:59.877227   10191 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	I0721 17:54:59.898333   10191 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0721 17:54:59.919856   10191 config.go:182] Loaded profile config "offline-docker-989000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:54:59.920000   10191 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:54:59.944715   10191 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0721 17:54:59.945021   10191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0721 17:55:00.029497   10191 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:99 OomKillDisable:false NGoroutines:182 SystemTime:2024-07-22 00:55:00.021035897 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0721 17:55:00.051347   10191 out.go:177] * Using the docker driver based on user configuration
	I0721 17:55:00.072201   10191 start.go:297] selected driver: docker
	I0721 17:55:00.072212   10191 start.go:901] validating driver "docker" against <nil>
	I0721 17:55:00.072219   10191 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:55:00.075571   10191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0721 17:55:00.155093   10191 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:99 OomKillDisable:false NGoroutines:182 SystemTime:2024-07-22 00:55:00.147187895 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0721 17:55:00.155257   10191 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:55:00.155441   10191 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0721 17:55:00.177064   10191 out.go:177] * Using Docker Desktop driver with root privileges
	I0721 17:55:00.198272   10191 cni.go:84] Creating CNI manager for ""
	I0721 17:55:00.198298   10191 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0721 17:55:00.198314   10191 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0721 17:55:00.198368   10191 start.go:340] cluster config:
	{Name:force-systemd-env-268000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-268000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:55:00.219276   10191 out.go:177] * Starting "force-systemd-env-268000" primary control-plane node in "force-systemd-env-268000" cluster
	I0721 17:55:00.261448   10191 cache.go:121] Beginning downloading kic base image for docker with docker
	I0721 17:55:00.283410   10191 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0721 17:55:00.325319   10191 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:55:00.325371   10191 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0721 17:55:00.325390   10191 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0721 17:55:00.325411   10191 cache.go:56] Caching tarball of preloaded images
	I0721 17:55:00.325687   10191 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0721 17:55:00.325707   10191 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:55:00.326640   10191 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/force-systemd-env-268000/config.json ...
	I0721 17:55:00.326802   10191 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/force-systemd-env-268000/config.json: {Name:mkfcf324dba72fbb2d123c71645126c6f6279ef3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0721 17:55:00.351715   10191 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f is of wrong architecture
	I0721 17:55:00.351728   10191 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0721 17:55:00.351852   10191 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0721 17:55:00.351875   10191 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0721 17:55:00.351882   10191 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0721 17:55:00.351904   10191 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0721 17:55:00.351912   10191 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0721 17:55:00.355047   10191 image.go:273] response: 
	I0721 17:55:00.994785   10191 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0721 17:55:00.994831   10191 cache.go:194] Successfully downloaded all kic artifacts
	I0721 17:55:00.994895   10191 start.go:360] acquireMachinesLock for force-systemd-env-268000: {Name:mk37062c111fdcae3adec151d9f4a8d036f1edf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:55:00.995066   10191 start.go:364] duration metric: took 158.578µs to acquireMachinesLock for "force-systemd-env-268000"
	I0721 17:55:00.995093   10191 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-268000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-268000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:55:00.995159   10191 start.go:125] createHost starting for "" (driver="docker")
	I0721 17:55:01.037419   10191 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0721 17:55:01.037626   10191 start.go:159] libmachine.API.Create for "force-systemd-env-268000" (driver="docker")
	I0721 17:55:01.037662   10191 client.go:168] LocalClient.Create starting
	I0721 17:55:01.037761   10191 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/ca.pem
	I0721 17:55:01.037815   10191 main.go:141] libmachine: Decoding PEM data...
	I0721 17:55:01.037832   10191 main.go:141] libmachine: Parsing certificate...
	I0721 17:55:01.037884   10191 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/cert.pem
	I0721 17:55:01.037921   10191 main.go:141] libmachine: Decoding PEM data...
	I0721 17:55:01.037929   10191 main.go:141] libmachine: Parsing certificate...
	I0721 17:55:01.038606   10191 cli_runner.go:164] Run: docker network inspect force-systemd-env-268000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0721 17:55:01.056036   10191 cli_runner.go:211] docker network inspect force-systemd-env-268000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0721 17:55:01.056154   10191 network_create.go:284] running [docker network inspect force-systemd-env-268000] to gather additional debugging logs...
	I0721 17:55:01.056173   10191 cli_runner.go:164] Run: docker network inspect force-systemd-env-268000
	W0721 17:55:01.073291   10191 cli_runner.go:211] docker network inspect force-systemd-env-268000 returned with exit code 1
	I0721 17:55:01.073332   10191 network_create.go:287] error running [docker network inspect force-systemd-env-268000]: docker network inspect force-systemd-env-268000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-268000 not found
	I0721 17:55:01.073347   10191 network_create.go:289] output of [docker network inspect force-systemd-env-268000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-268000 not found
	
	** /stderr **
	I0721 17:55:01.073473   10191 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0721 17:55:01.092323   10191 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:55:01.093945   10191 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:55:01.095591   10191 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:55:01.097199   10191 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:55:01.097523   10191 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00161ce30}
	I0721 17:55:01.097538   10191 network_create.go:124] attempt to create docker network force-systemd-env-268000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0721 17:55:01.097607   10191 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-268000 force-systemd-env-268000
	I0721 17:55:01.160933   10191 network_create.go:108] docker network force-systemd-env-268000 192.168.85.0/24 created
	I0721 17:55:01.160977   10191 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-268000" container
	I0721 17:55:01.161093   10191 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0721 17:55:01.180326   10191 cli_runner.go:164] Run: docker volume create force-systemd-env-268000 --label name.minikube.sigs.k8s.io=force-systemd-env-268000 --label created_by.minikube.sigs.k8s.io=true
	I0721 17:55:01.198704   10191 oci.go:103] Successfully created a docker volume force-systemd-env-268000
	I0721 17:55:01.198823   10191 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-268000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-268000 --entrypoint /usr/bin/test -v force-systemd-env-268000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0721 17:55:01.645588   10191 oci.go:107] Successfully prepared a docker volume force-systemd-env-268000
	I0721 17:55:01.645630   10191 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:55:01.645645   10191 kic.go:194] Starting extracting preloaded images to volume ...
	I0721 17:55:01.645739   10191 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-268000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0721 18:01:01.035984   10191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 18:01:01.036126   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:01:01.056012   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	I0721 18:01:01.056139   10191 retry.go:31] will retry after 305.524856ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:01.364135   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:01:01.384215   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	I0721 18:01:01.384331   10191 retry.go:31] will retry after 498.818188ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:01.884683   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:01:01.904660   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	I0721 18:01:01.904751   10191 retry.go:31] will retry after 301.467478ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:02.208656   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:01:02.228375   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	W0721 18:01:02.228477   10191 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	
	W0721 18:01:02.228512   10191 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:02.228571   10191 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0721 18:01:02.228632   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:01:02.246122   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	I0721 18:01:02.246227   10191 retry.go:31] will retry after 165.92102ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:02.412504   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:01:02.432026   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	I0721 18:01:02.432118   10191 retry.go:31] will retry after 411.29408ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:02.843732   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:01:02.862272   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	I0721 18:01:02.862365   10191 retry.go:31] will retry after 821.075357ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:03.685179   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:01:03.704595   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	W0721 18:01:03.704716   10191 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	
	W0721 18:01:03.704733   10191 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:03.704753   10191 start.go:128] duration metric: took 6m2.712559047s to createHost
	I0721 18:01:03.704762   10191 start.go:83] releasing machines lock for "force-systemd-env-268000", held for 6m2.712669272s
	W0721 18:01:03.704776   10191 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0721 18:01:03.705230   10191 cli_runner.go:164] Run: docker container inspect force-systemd-env-268000 --format={{.State.Status}}
	W0721 18:01:03.723440   10191 cli_runner.go:211] docker container inspect force-systemd-env-268000 --format={{.State.Status}} returned with exit code 1
	I0721 18:01:03.723500   10191 delete.go:82] Unable to get host status for force-systemd-env-268000, assuming it has already been deleted: state: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	W0721 18:01:03.723604   10191 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0721 18:01:03.723614   10191 start.go:729] Will try again in 5 seconds ...
	I0721 18:01:08.724879   10191 start.go:360] acquireMachinesLock for force-systemd-env-268000: {Name:mk37062c111fdcae3adec151d9f4a8d036f1edf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 18:01:08.725892   10191 start.go:364] duration metric: took 877.157µs to acquireMachinesLock for "force-systemd-env-268000"
	I0721 18:01:08.725958   10191 start.go:96] Skipping create...Using existing machine configuration
	I0721 18:01:08.725978   10191 fix.go:54] fixHost starting: 
	I0721 18:01:08.726520   10191 cli_runner.go:164] Run: docker container inspect force-systemd-env-268000 --format={{.State.Status}}
	W0721 18:01:08.745547   10191 cli_runner.go:211] docker container inspect force-systemd-env-268000 --format={{.State.Status}} returned with exit code 1
	I0721 18:01:08.745598   10191 fix.go:112] recreateIfNeeded on force-systemd-env-268000: state= err=unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:08.745617   10191 fix.go:117] machineExists: false. err=machine does not exist
	I0721 18:01:08.789074   10191 out.go:177] * docker "force-systemd-env-268000" container is missing, will recreate.
	I0721 18:01:08.809994   10191 delete.go:124] DEMOLISHING force-systemd-env-268000 ...
	I0721 18:01:08.810227   10191 cli_runner.go:164] Run: docker container inspect force-systemd-env-268000 --format={{.State.Status}}
	W0721 18:01:08.829071   10191 cli_runner.go:211] docker container inspect force-systemd-env-268000 --format={{.State.Status}} returned with exit code 1
	W0721 18:01:08.829131   10191 stop.go:83] unable to get state: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:08.829145   10191 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:08.829531   10191 cli_runner.go:164] Run: docker container inspect force-systemd-env-268000 --format={{.State.Status}}
	W0721 18:01:08.846607   10191 cli_runner.go:211] docker container inspect force-systemd-env-268000 --format={{.State.Status}} returned with exit code 1
	I0721 18:01:08.846656   10191 delete.go:82] Unable to get host status for force-systemd-env-268000, assuming it has already been deleted: state: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:08.846748   10191 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-268000
	W0721 18:01:08.864061   10191 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-268000 returned with exit code 1
	I0721 18:01:08.864109   10191 kic.go:371] could not find the container force-systemd-env-268000 to remove it. will try anyways
	I0721 18:01:08.864201   10191 cli_runner.go:164] Run: docker container inspect force-systemd-env-268000 --format={{.State.Status}}
	W0721 18:01:08.881223   10191 cli_runner.go:211] docker container inspect force-systemd-env-268000 --format={{.State.Status}} returned with exit code 1
	W0721 18:01:08.881278   10191 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:08.881365   10191 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-268000 /bin/bash -c "sudo init 0"
	W0721 18:01:08.898330   10191 cli_runner.go:211] docker exec --privileged -t force-systemd-env-268000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0721 18:01:08.898368   10191 oci.go:650] error shutdown force-systemd-env-268000: docker exec --privileged -t force-systemd-env-268000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:09.898637   10191 cli_runner.go:164] Run: docker container inspect force-systemd-env-268000 --format={{.State.Status}}
	W0721 18:01:09.916230   10191 cli_runner.go:211] docker container inspect force-systemd-env-268000 --format={{.State.Status}} returned with exit code 1
	I0721 18:01:09.916283   10191 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:09.916301   10191 oci.go:664] temporary error: container force-systemd-env-268000 status is  but expect it to be exited
	I0721 18:01:09.916322   10191 retry.go:31] will retry after 292.918373ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:10.211638   10191 cli_runner.go:164] Run: docker container inspect force-systemd-env-268000 --format={{.State.Status}}
	W0721 18:01:10.231020   10191 cli_runner.go:211] docker container inspect force-systemd-env-268000 --format={{.State.Status}} returned with exit code 1
	I0721 18:01:10.231072   10191 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:10.231081   10191 oci.go:664] temporary error: container force-systemd-env-268000 status is  but expect it to be exited
	I0721 18:01:10.231107   10191 retry.go:31] will retry after 623.946171ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:10.857454   10191 cli_runner.go:164] Run: docker container inspect force-systemd-env-268000 --format={{.State.Status}}
	W0721 18:01:10.878679   10191 cli_runner.go:211] docker container inspect force-systemd-env-268000 --format={{.State.Status}} returned with exit code 1
	I0721 18:01:10.878733   10191 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:10.878743   10191 oci.go:664] temporary error: container force-systemd-env-268000 status is  but expect it to be exited
	I0721 18:01:10.878769   10191 retry.go:31] will retry after 1.502341229s: couldn't verify container is exited. %v: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:12.381925   10191 cli_runner.go:164] Run: docker container inspect force-systemd-env-268000 --format={{.State.Status}}
	W0721 18:01:12.401572   10191 cli_runner.go:211] docker container inspect force-systemd-env-268000 --format={{.State.Status}} returned with exit code 1
	I0721 18:01:12.401621   10191 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:12.401634   10191 oci.go:664] temporary error: container force-systemd-env-268000 status is  but expect it to be exited
	I0721 18:01:12.401659   10191 retry.go:31] will retry after 1.707232152s: couldn't verify container is exited. %v: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:14.109405   10191 cli_runner.go:164] Run: docker container inspect force-systemd-env-268000 --format={{.State.Status}}
	W0721 18:01:14.129038   10191 cli_runner.go:211] docker container inspect force-systemd-env-268000 --format={{.State.Status}} returned with exit code 1
	I0721 18:01:14.129090   10191 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:14.129102   10191 oci.go:664] temporary error: container force-systemd-env-268000 status is  but expect it to be exited
	I0721 18:01:14.129127   10191 retry.go:31] will retry after 2.667552165s: couldn't verify container is exited. %v: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:16.798884   10191 cli_runner.go:164] Run: docker container inspect force-systemd-env-268000 --format={{.State.Status}}
	W0721 18:01:16.819730   10191 cli_runner.go:211] docker container inspect force-systemd-env-268000 --format={{.State.Status}} returned with exit code 1
	I0721 18:01:16.819785   10191 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:16.819796   10191 oci.go:664] temporary error: container force-systemd-env-268000 status is  but expect it to be exited
	I0721 18:01:16.819818   10191 retry.go:31] will retry after 5.244661196s: couldn't verify container is exited. %v: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:22.066413   10191 cli_runner.go:164] Run: docker container inspect force-systemd-env-268000 --format={{.State.Status}}
	W0721 18:01:22.087401   10191 cli_runner.go:211] docker container inspect force-systemd-env-268000 --format={{.State.Status}} returned with exit code 1
	I0721 18:01:22.087450   10191 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:22.087463   10191 oci.go:664] temporary error: container force-systemd-env-268000 status is  but expect it to be exited
	I0721 18:01:22.087489   10191 retry.go:31] will retry after 5.689833761s: couldn't verify container is exited. %v: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:27.779662   10191 cli_runner.go:164] Run: docker container inspect force-systemd-env-268000 --format={{.State.Status}}
	W0721 18:01:27.799721   10191 cli_runner.go:211] docker container inspect force-systemd-env-268000 --format={{.State.Status}} returned with exit code 1
	I0721 18:01:27.799772   10191 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:01:27.799780   10191 oci.go:664] temporary error: container force-systemd-env-268000 status is  but expect it to be exited
	I0721 18:01:27.799813   10191 oci.go:88] couldn't shut down force-systemd-env-268000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	 
	I0721 18:01:27.799898   10191 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-268000
	I0721 18:01:27.818187   10191 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-268000
	W0721 18:01:27.835751   10191 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-268000 returned with exit code 1
	I0721 18:01:27.835874   10191 cli_runner.go:164] Run: docker network inspect force-systemd-env-268000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0721 18:01:27.853704   10191 cli_runner.go:164] Run: docker network rm force-systemd-env-268000
	I0721 18:01:27.935949   10191 fix.go:124] Sleeping 1 second for extra luck!
	I0721 18:01:28.938113   10191 start.go:125] createHost starting for "" (driver="docker")
	I0721 18:01:28.961533   10191 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0721 18:01:28.961739   10191 start.go:159] libmachine.API.Create for "force-systemd-env-268000" (driver="docker")
	I0721 18:01:28.961768   10191 client.go:168] LocalClient.Create starting
	I0721 18:01:28.961984   10191 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/ca.pem
	I0721 18:01:28.962079   10191 main.go:141] libmachine: Decoding PEM data...
	I0721 18:01:28.962104   10191 main.go:141] libmachine: Parsing certificate...
	I0721 18:01:28.962201   10191 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/cert.pem
	I0721 18:01:28.962279   10191 main.go:141] libmachine: Decoding PEM data...
	I0721 18:01:28.962295   10191 main.go:141] libmachine: Parsing certificate...
	I0721 18:01:28.962982   10191 cli_runner.go:164] Run: docker network inspect force-systemd-env-268000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0721 18:01:28.981730   10191 cli_runner.go:211] docker network inspect force-systemd-env-268000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0721 18:01:28.981845   10191 network_create.go:284] running [docker network inspect force-systemd-env-268000] to gather additional debugging logs...
	I0721 18:01:28.981860   10191 cli_runner.go:164] Run: docker network inspect force-systemd-env-268000
	W0721 18:01:28.999395   10191 cli_runner.go:211] docker network inspect force-systemd-env-268000 returned with exit code 1
	I0721 18:01:28.999428   10191 network_create.go:287] error running [docker network inspect force-systemd-env-268000]: docker network inspect force-systemd-env-268000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-268000 not found
	I0721 18:01:28.999441   10191 network_create.go:289] output of [docker network inspect force-systemd-env-268000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-268000 not found
	
	** /stderr **
	I0721 18:01:28.999571   10191 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0721 18:01:29.018699   10191 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:01:29.020276   10191 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:01:29.021701   10191 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:01:29.023289   10191 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:01:29.024803   10191 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:01:29.026494   10191 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 18:01:29.026914   10191 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00161dae0}
	I0721 18:01:29.026945   10191 network_create.go:124] attempt to create docker network force-systemd-env-268000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0721 18:01:29.027039   10191 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-268000 force-systemd-env-268000
	I0721 18:01:29.090852   10191 network_create.go:108] docker network force-systemd-env-268000 192.168.103.0/24 created
	I0721 18:01:29.090901   10191 kic.go:121] calculated static IP "192.168.103.2" for the "force-systemd-env-268000" container
	I0721 18:01:29.091012   10191 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0721 18:01:29.110413   10191 cli_runner.go:164] Run: docker volume create force-systemd-env-268000 --label name.minikube.sigs.k8s.io=force-systemd-env-268000 --label created_by.minikube.sigs.k8s.io=true
	I0721 18:01:29.127633   10191 oci.go:103] Successfully created a docker volume force-systemd-env-268000
	I0721 18:01:29.127761   10191 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-268000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-268000 --entrypoint /usr/bin/test -v force-systemd-env-268000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0721 18:01:29.386116   10191 oci.go:107] Successfully prepared a docker volume force-systemd-env-268000
	I0721 18:01:29.386151   10191 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 18:01:29.386164   10191 kic.go:194] Starting extracting preloaded images to volume ...
	I0721 18:01:29.386288   10191 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-268000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0721 18:07:29.050291   10191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 18:07:29.050417   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:07:29.071111   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	I0721 18:07:29.071227   10191 retry.go:31] will retry after 253.074618ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:07:29.326715   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:07:29.347117   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	I0721 18:07:29.347256   10191 retry.go:31] will retry after 539.874299ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:07:29.887957   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:07:29.907753   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	I0721 18:07:29.907850   10191 retry.go:31] will retry after 679.644838ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:07:30.588647   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:07:30.608934   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	W0721 18:07:30.609037   10191 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	
	W0721 18:07:30.609055   10191 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:07:30.609125   10191 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0721 18:07:30.609186   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:07:30.627124   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	I0721 18:07:30.627217   10191 retry.go:31] will retry after 136.013296ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:07:30.765608   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:07:30.784360   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	I0721 18:07:30.784478   10191 retry.go:31] will retry after 345.200475ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:07:31.130509   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:07:31.151576   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	I0721 18:07:31.151693   10191 retry.go:31] will retry after 761.371241ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:07:31.915134   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:07:31.934076   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	W0721 18:07:31.934183   10191 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	
	W0721 18:07:31.934201   10191 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:07:31.934213   10191 start.go:128] duration metric: took 6m2.909951686s to createHost
	I0721 18:07:31.934293   10191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 18:07:31.934357   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:07:32.002944   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	I0721 18:07:32.003038   10191 retry.go:31] will retry after 198.239316ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:07:32.202847   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:07:32.223214   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	I0721 18:07:32.223313   10191 retry.go:31] will retry after 332.508923ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:07:32.556156   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:07:32.575656   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	I0721 18:07:32.575761   10191 retry.go:31] will retry after 292.074023ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:07:32.870188   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:07:32.888942   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	I0721 18:07:32.889045   10191 retry.go:31] will retry after 611.011972ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:07:33.502463   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:07:33.522420   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	W0721 18:07:33.522517   10191 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	
	W0721 18:07:33.522532   10191 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:07:33.522602   10191 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0721 18:07:33.522655   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:07:33.540193   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	I0721 18:07:33.540288   10191 retry.go:31] will retry after 179.411823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:07:33.721275   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:07:33.740925   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	I0721 18:07:33.741019   10191 retry.go:31] will retry after 227.102316ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:07:33.970456   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:07:33.989785   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	I0721 18:07:33.989886   10191 retry.go:31] will retry after 359.808244ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:07:34.351121   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:07:34.371996   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	I0721 18:07:34.372096   10191 retry.go:31] will retry after 920.390751ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:07:35.292796   10191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000
	W0721 18:07:35.312387   10191 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000 returned with exit code 1
	W0721 18:07:35.312485   10191 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	
	W0721 18:07:35.312501   10191 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-268000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-268000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	I0721 18:07:35.312514   10191 fix.go:56] duration metric: took 6m26.50063222s for fixHost
	I0721 18:07:35.312521   10191 start.go:83] releasing machines lock for "force-systemd-env-268000", held for 6m26.500680652s
	W0721 18:07:35.312602   10191 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-268000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-268000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0721 18:07:35.356228   10191 out.go:177] 
	W0721 18:07:35.377360   10191 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0721 18:07:35.377420   10191 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0721 18:07:35.377479   10191 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0721 18:07:35.420206   10191 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-268000 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-268000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-268000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (160.480871ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-env-268000 host status: state: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000
	

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-268000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-21 18:07:35.657849 -0700 PDT m=+6214.392159554
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-268000
helpers_test.go:235: (dbg) docker inspect force-systemd-env-268000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "force-systemd-env-268000",
	        "Id": "b4ce448a6af3749d06377fe99dacebbb89881b9dbc292ed80db37b298be754ce",
	        "Created": "2024-07-22T01:01:29.042695659Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-env-268000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-268000 -n force-systemd-env-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-268000 -n force-systemd-env-268000: exit status 7 (74.008615ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0721 18:07:35.752263   10738 status.go:249] status error: host: state: unknown state "force-systemd-env-268000": docker container inspect force-systemd-env-268000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-268000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-268000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-268000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-268000
--- FAIL: TestForceSystemdEnv (756.48s)
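The createHost step above ran for 6m2.9s against a 360-second budget while the preloaded-images tarball was being extracted into the volume, and every state probe failed with "No such container" because the force-systemd-env-268000 container itself was never created. The probe the log keeps retrying can be reproduced by hand; the following is a minimal sketch using only commands already shown in the log, assuming Docker Desktop is running on the same host:

	# Same state probe fix.go/oci.go issue in the log above; while the
	# container is missing it exits 1 with "No such container".
	docker container inspect force-systemd-env-268000 --format '{{.State.Status}}'

	# Cleanup suggested by the failure message before re-running the test:
	minikube delete -p force-systemd-env-268000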

                                                
                                    
TestMountStart/serial/VerifyMountFirst (892.78s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-349000 ssh -- ls /minikube-host
E0721 16:54:14.552703    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 16:54:54.316735    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 16:56:17.363912    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 16:59:14.550809    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 16:59:54.315798    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 17:04:14.617476    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 17:04:54.381763    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-1-349000 ssh -- ls /minikube-host: signal: killed (14m52.516715875s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-1-349000 ssh -- ls /minikube-host" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountFirst]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-1-349000
helpers_test.go:235: (dbg) docker inspect mount-start-1-349000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ef23ce05cd863d27f8121e8caa01aa9d317b9e99e32a365217ee0df17209dd8a",
	        "Created": "2024-07-21T23:51:03.394781209Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 127383,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-21T23:51:03.527763017Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7bda27423b38cbebec7632cdf15a8fcb063ff209d17af249e6b3f1fbdb5fa681",
	        "ResolvConfPath": "/var/lib/docker/containers/ef23ce05cd863d27f8121e8caa01aa9d317b9e99e32a365217ee0df17209dd8a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ef23ce05cd863d27f8121e8caa01aa9d317b9e99e32a365217ee0df17209dd8a/hostname",
	        "HostsPath": "/var/lib/docker/containers/ef23ce05cd863d27f8121e8caa01aa9d317b9e99e32a365217ee0df17209dd8a/hosts",
	        "LogPath": "/var/lib/docker/containers/ef23ce05cd863d27f8121e8caa01aa9d317b9e99e32a365217ee0df17209dd8a/ef23ce05cd863d27f8121e8caa01aa9d317b9e99e32a365217ee0df17209dd8a-json.log",
	        "Name": "/mount-start-1-349000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/Users:/minikube-host",
	                "/lib/modules:/lib/modules:ro",
	                "mount-start-1-349000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-1-349000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2529a4de381e27046abadc20cbaca92ec61f83deebdc7df436e61a232ea400d5-init/diff:/var/lib/docker/overlay2/cb01244efac4c6958801c30c80a394318929a290d876b0f3307332544ab59d29/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2529a4de381e27046abadc20cbaca92ec61f83deebdc7df436e61a232ea400d5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2529a4de381e27046abadc20cbaca92ec61f83deebdc7df436e61a232ea400d5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2529a4de381e27046abadc20cbaca92ec61f83deebdc7df436e61a232ea400d5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "mount-start-1-349000",
	                "Source": "/var/lib/docker/volumes/mount-start-1-349000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-1-349000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-1-349000",
	                "name.minikube.sigs.k8s.io": "mount-start-1-349000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a2945a92f283e83a919aa312982f751015b865d3f7f60fe8e82711a703a4aa5a",
	            "SandboxKey": "/var/run/docker/netns/a2945a92f283",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51759"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51760"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51761"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51762"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51763"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-1-349000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null,
	                    "NetworkID": "058f74582f976282a27cc6d8fcfddf569f38a4393e7f17db068be52808225295",
	                    "EndpointID": "e8daa14a3f13e4a216ca8fd76f923d8d852e5686f04db31e2e0c3f86f6272d64",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "mount-start-1-349000",
	                        "ef23ce05cd86"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-349000 -n mount-start-1-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-349000 -n mount-start-1-349000: exit status 6 (241.423728ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0721 17:06:01.557546    7596 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-1-349000" does not appear in /Users/jenkins/minikube-integration/19312-1112/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-1-349000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountFirst (892.78s)
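
The exit-6 path above is purely a kubeconfig problem: the host reports Running, but status.go cannot find an endpoint for "mount-start-1-349000" in the kubeconfig, which is exactly what the printed warning says to repair with `minikube update-context`. A small sketch of automating that check-then-repair step, assuming only that kubectl and minikube are on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	profile := "mount-start-1-349000" // profile name from the failing test

	// List known contexts; the status error above means the profile is
	// absent from the kubeconfig, not that the host itself is down.
	out, err := exec.Command("kubectl", "config", "get-contexts").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kubectl:", err)
		os.Exit(1)
	}
	if !strings.Contains(string(out), profile) {
		// Rewrite the stale kubeconfig entry, as the warning suggests.
		fix, err := exec.Command("minikube", "update-context", "-p", profile).CombinedOutput()
		fmt.Print(string(fix))
		if err != nil {
			os.Exit(1)
		}
	}
}
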

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (755.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-357000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0721 17:07:17.667287    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 17:09:14.617504    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 17:09:54.381417    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 17:12:57.429495    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 17:14:14.616639    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 17:14:54.380848    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 17:19:14.616612    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-357000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m35.001770055s)

                                                
                                                
-- stdout --
	* [multinode-357000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "multinode-357000" primary control-plane node in "multinode-357000" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-357000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:07:09.343350    7653 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:07:09.343645    7653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:07:09.343651    7653 out.go:304] Setting ErrFile to fd 2...
	I0721 17:07:09.343655    7653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:07:09.343834    7653 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 17:07:09.345342    7653 out.go:298] Setting JSON to false
	I0721 17:07:09.368272    7653 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":3999,"bootTime":1721602830,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0721 17:07:09.368361    7653 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:07:09.390183    7653 out.go:177] * [multinode-357000] minikube v1.33.1 on Darwin 14.5
	I0721 17:07:09.433131    7653 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:07:09.433201    7653 notify.go:220] Checking for updates...
	I0721 17:07:09.475814    7653 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	I0721 17:07:09.497168    7653 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0721 17:07:09.518265    7653 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:07:09.539809    7653 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	I0721 17:07:09.561141    7653 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:07:09.582653    7653 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:07:09.606900    7653 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0721 17:07:09.607074    7653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0721 17:07:09.689930    7653 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:74 SystemTime:2024-07-22 00:07:09.680806439 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0721 17:07:09.732153    7653 out.go:177] * Using the docker driver based on user configuration
	I0721 17:07:09.753178    7653 start.go:297] selected driver: docker
	I0721 17:07:09.753209    7653 start.go:901] validating driver "docker" against <nil>
	I0721 17:07:09.753223    7653 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:07:09.758165    7653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0721 17:07:09.838786    7653 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:74 SystemTime:2024-07-22 00:07:09.830465329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0721 17:07:09.838967    7653 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 17:07:09.839174    7653 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:07:09.861100    7653 out.go:177] * Using Docker Desktop driver with root privileges
	I0721 17:07:09.883171    7653 cni.go:84] Creating CNI manager for ""
	I0721 17:07:09.883204    7653 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0721 17:07:09.883230    7653 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0721 17:07:09.883345    7653 start.go:340] cluster config:
	{Name:multinode-357000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-357000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:07:09.905121    7653 out.go:177] * Starting "multinode-357000" primary control-plane node in "multinode-357000" cluster
	I0721 17:07:09.947052    7653 cache.go:121] Beginning downloading kic base image for docker with docker
	I0721 17:07:09.969985    7653 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0721 17:07:10.012183    7653 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:07:10.012236    7653 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0721 17:07:10.012260    7653 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0721 17:07:10.012287    7653 cache.go:56] Caching tarball of preloaded images
	I0721 17:07:10.012524    7653 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0721 17:07:10.012544    7653 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:07:10.014108    7653 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/multinode-357000/config.json ...
	I0721 17:07:10.014237    7653 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/multinode-357000/config.json: {Name:mk6498a994550a9a4c0463ce73d279405d248456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0721 17:07:10.038447    7653 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f is of wrong architecture
	I0721 17:07:10.038461    7653 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0721 17:07:10.038575    7653 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0721 17:07:10.038593    7653 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0721 17:07:10.038599    7653 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0721 17:07:10.038608    7653 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0721 17:07:10.038613    7653 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0721 17:07:10.041641    7653 image.go:273] response: 
	I0721 17:07:10.678599    7653 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0721 17:07:10.678652    7653 cache.go:194] Successfully downloaded all kic artifacts
	I0721 17:07:10.678701    7653 start.go:360] acquireMachinesLock for multinode-357000: {Name:mkcac3380918714b218acb546d0dc62757d251e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:07:10.678873    7653 start.go:364] duration metric: took 158.585µs to acquireMachinesLock for "multinode-357000"
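
acquireMachinesLock above serializes host creation per profile, and the lock spec carries Delay:500ms and Timeout:10m0s. A rough sketch of that shape using an O_EXCL lock file as the primitive; the mechanism is an assumption for illustration, not minikube's exact implementation:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lock file until timeout, retrying at
// a fixed delay, mirroring the Delay/Timeout fields in the log above.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out waiting for " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire(os.TempDir()+"/demo-machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held; provisioning would run here")
}
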
	I0721 17:07:10.678907    7653 start.go:93] Provisioning new machine with config: &{Name:multinode-357000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-357000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0721 17:07:10.678978    7653 start.go:125] createHost starting for "" (driver="docker")
	I0721 17:07:10.721550    7653 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0721 17:07:10.721772    7653 start.go:159] libmachine.API.Create for "multinode-357000" (driver="docker")
	I0721 17:07:10.721799    7653 client.go:168] LocalClient.Create starting
	I0721 17:07:10.721935    7653 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/ca.pem
	I0721 17:07:10.721989    7653 main.go:141] libmachine: Decoding PEM data...
	I0721 17:07:10.722009    7653 main.go:141] libmachine: Parsing certificate...
	I0721 17:07:10.722056    7653 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/cert.pem
	I0721 17:07:10.722100    7653 main.go:141] libmachine: Decoding PEM data...
	I0721 17:07:10.722109    7653 main.go:141] libmachine: Parsing certificate...
	I0721 17:07:10.722609    7653 cli_runner.go:164] Run: docker network inspect multinode-357000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0721 17:07:10.740059    7653 cli_runner.go:211] docker network inspect multinode-357000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0721 17:07:10.740158    7653 network_create.go:284] running [docker network inspect multinode-357000] to gather additional debugging logs...
	I0721 17:07:10.740181    7653 cli_runner.go:164] Run: docker network inspect multinode-357000
	W0721 17:07:10.757554    7653 cli_runner.go:211] docker network inspect multinode-357000 returned with exit code 1
	I0721 17:07:10.757580    7653 network_create.go:287] error running [docker network inspect multinode-357000]: docker network inspect multinode-357000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-357000 not found
	I0721 17:07:10.757601    7653 network_create.go:289] output of [docker network inspect multinode-357000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-357000 not found
	
	** /stderr **
	I0721 17:07:10.757733    7653 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0721 17:07:10.777261    7653 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:07:10.778855    7653 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:07:10.779229    7653 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015c8360}
	I0721 17:07:10.779246    7653 network_create.go:124] attempt to create docker network multinode-357000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0721 17:07:10.779324    7653 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-357000 multinode-357000
	W0721 17:07:10.797170    7653 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-357000 multinode-357000 returned with exit code 1
	W0721 17:07:10.797219    7653 network_create.go:149] failed to create docker network multinode-357000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-357000 multinode-357000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0721 17:07:10.797247    7653 network_create.go:116] failed to create docker network multinode-357000 192.168.67.0/24, will retry: subnet is taken
	I0721 17:07:10.798635    7653 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:07:10.799008    7653 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001684230}
	I0721 17:07:10.799027    7653 network_create.go:124] attempt to create docker network multinode-357000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0721 17:07:10.799094    7653 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-357000 multinode-357000
	I0721 17:07:10.864081    7653 network_create.go:108] docker network multinode-357000 192.168.76.0/24 created
	I0721 17:07:10.864128    7653 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-357000" container
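
The two create attempts above show the subnet search: candidate /24s advance by 9 in the third octet (49, 58, 67, 76, ...), and a "Pool overlaps" rejection from the daemon simply moves the search along. A compressed sketch of that loop against the plain docker CLI; the step size and range come from this log, the rest is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const name = "multinode-357000" // network name from the log

	// Walk the 192.168.x.0/24 candidates in steps of 9, matching the
	// 49 -> 58 -> 67 -> 76 progression seen above.
	for third := 49; third <= 247; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
		if err == nil {
			fmt.Println("created", name, "on", subnet)
			return
		}
		if strings.Contains(string(out), "Pool overlaps") {
			continue // subnet taken by another network; try the next one
		}
		fmt.Print(string(out)) // some other daemon error: stop searching
		return
	}
}
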
	I0721 17:07:10.864239    7653 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0721 17:07:10.882063    7653 cli_runner.go:164] Run: docker volume create multinode-357000 --label name.minikube.sigs.k8s.io=multinode-357000 --label created_by.minikube.sigs.k8s.io=true
	I0721 17:07:10.900144    7653 oci.go:103] Successfully created a docker volume multinode-357000
	I0721 17:07:10.900286    7653 cli_runner.go:164] Run: docker run --rm --name multinode-357000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-357000 --entrypoint /usr/bin/test -v multinode-357000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0721 17:07:11.326592    7653 oci.go:107] Successfully prepared a docker volume multinode-357000
	I0721 17:07:11.326638    7653 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:07:11.326658    7653 kic.go:194] Starting extracting preloaded images to volume ...
	I0721 17:07:11.326811    7653 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-357000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0721 17:13:10.722239    7653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 17:13:10.722387    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:13:10.742057    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:13:10.742183    7653 retry.go:31] will retry after 218.46939ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
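
Each failed inspect in this stretch is followed by a retry.go line with a growing delay (218ms, 326ms, 424ms, ...). A generic sketch of that retry pattern; the jittered growth schedule is an assumption, since the log shows the delays but not how they are derived:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a growing, jittered
// delay between failures, in the spirit of the retry.go lines here.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := retry(4, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("No such container: multinode-357000")
		}
		return nil
	})
	fmt.Println("final result:", err)
}
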
	I0721 17:13:10.961996    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:13:10.981866    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:13:10.981959    7653 retry.go:31] will retry after 326.096313ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:11.310518    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:13:11.330945    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:13:11.331038    7653 retry.go:31] will retry after 424.507368ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:11.756728    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:13:11.818889    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	W0721 17:13:11.819039    7653 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	
	W0721 17:13:11.819073    7653 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:11.819149    7653 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0721 17:13:11.819227    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:13:11.842594    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:13:11.842691    7653 retry.go:31] will retry after 191.217948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:12.035104    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:13:12.055390    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:13:12.055480    7653 retry.go:31] will retry after 258.097425ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:12.313918    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:13:12.333305    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:13:12.333415    7653 retry.go:31] will retry after 376.578284ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:12.711896    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:13:12.731256    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:13:12.731356    7653 retry.go:31] will retry after 589.283341ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:13.321148    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:13:13.339043    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	W0721 17:13:13.339146    7653 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	
	W0721 17:13:13.339162    7653 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:13.339192    7653 start.go:128] duration metric: took 6m2.660168295s to createHost
	I0721 17:13:13.339199    7653 start.go:83] releasing machines lock for "multinode-357000", held for 6m2.660294764s
	W0721 17:13:13.339213    7653 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0721 17:13:13.339648    7653 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:13:13.357749    7653 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:13:13.357811    7653 delete.go:82] Unable to get host status for multinode-357000, assuming it has already been deleted: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	W0721 17:13:13.357906    7653 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0721 17:13:13.357915    7653 start.go:729] Will try again in 5 seconds ...
	I0721 17:13:18.360869    7653 start.go:360] acquireMachinesLock for multinode-357000: {Name:mkcac3380918714b218acb546d0dc62757d251e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:13:18.361243    7653 start.go:364] duration metric: took 227.692µs to acquireMachinesLock for "multinode-357000"
	I0721 17:13:18.361310    7653 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:13:18.361335    7653 fix.go:54] fixHost starting: 
	I0721 17:13:18.361813    7653 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:13:18.381294    7653 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:13:18.381357    7653 fix.go:112] recreateIfNeeded on multinode-357000: state= err=unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:18.381376    7653 fix.go:117] machineExists: false. err=machine does not exist
	I0721 17:13:18.403443    7653 out.go:177] * docker "multinode-357000" container is missing, will recreate.
	I0721 17:13:18.445725    7653 delete.go:124] DEMOLISHING multinode-357000 ...
	I0721 17:13:18.445906    7653 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:13:18.464183    7653 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	W0721 17:13:18.464226    7653 stop.go:83] unable to get state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:18.464247    7653 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:18.464623    7653 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:13:18.481610    7653 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:13:18.481673    7653 delete.go:82] Unable to get host status for multinode-357000, assuming it has already been deleted: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:18.481772    7653 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-357000
	W0721 17:13:18.498874    7653 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-357000 returned with exit code 1
	I0721 17:13:18.498908    7653 kic.go:371] could not find the container multinode-357000 to remove it. will try anyways
	I0721 17:13:18.498996    7653 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:13:18.515971    7653 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	W0721 17:13:18.516018    7653 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:18.516113    7653 cli_runner.go:164] Run: docker exec --privileged -t multinode-357000 /bin/bash -c "sudo init 0"
	W0721 17:13:18.533691    7653 cli_runner.go:211] docker exec --privileged -t multinode-357000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0721 17:13:18.533723    7653 oci.go:650] error shutdown multinode-357000: docker exec --privileged -t multinode-357000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:19.536205    7653 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:13:19.556391    7653 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:13:19.556433    7653 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:19.556445    7653 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:13:19.556471    7653 retry.go:31] will retry after 646.710636ms: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:20.204371    7653 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:13:20.223575    7653 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:13:20.223629    7653 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:20.223640    7653 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:13:20.223663    7653 retry.go:31] will retry after 422.471348ms: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:20.648331    7653 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:13:20.668514    7653 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:13:20.668568    7653 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:20.668580    7653 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:13:20.668603    7653 retry.go:31] will retry after 1.140179919s: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:21.811140    7653 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:13:21.830781    7653 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:13:21.830828    7653 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:21.830841    7653 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:13:21.830869    7653 retry.go:31] will retry after 2.456648216s: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:24.288751    7653 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:13:24.308312    7653 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:13:24.308357    7653 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:24.308367    7653 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:13:24.308394    7653 retry.go:31] will retry after 3.08604483s: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:27.396080    7653 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:13:27.415475    7653 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:13:27.415534    7653 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:27.415547    7653 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:13:27.415570    7653 retry.go:31] will retry after 2.851952637s: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:30.270000    7653 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:13:30.289252    7653 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:13:30.289302    7653 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:30.289312    7653 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:13:30.289341    7653 retry.go:31] will retry after 6.161786524s: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:36.452496    7653 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:13:36.472484    7653 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:13:36.472526    7653 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:13:36.472537    7653 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:13:36.472570    7653 oci.go:88] couldn't shut down multinode-357000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	 
	I0721 17:13:36.472653    7653 cli_runner.go:164] Run: docker rm -f -v multinode-357000
	I0721 17:13:36.490569    7653 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-357000
	W0721 17:13:36.508280    7653 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-357000 returned with exit code 1
	I0721 17:13:36.508398    7653 cli_runner.go:164] Run: docker network inspect multinode-357000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0721 17:13:36.525839    7653 cli_runner.go:164] Run: docker network rm multinode-357000
	I0721 17:13:36.601641    7653 fix.go:124] Sleeping 1 second for extra luck!
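
With the container absent, DEMOLISHING reduces to force-removing whatever half-created resources exist: the container with its volumes, then the per-profile network, treating "No such ..." failures as probably ok, exactly as the delete.go lines above do. A condensed sketch of that cleanup:

package main

import (
	"fmt"
	"os/exec"
)

// demolish force-removes a profile's container and bridge network,
// tolerating errors for resources that were never actually created.
func demolish(profile string) {
	for _, args := range [][]string{
		{"rm", "-f", "-v", profile},
		{"network", "rm", profile},
	} {
		if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
			fmt.Printf("docker %v failed (probably ok): %s", args, out)
		}
	}
}

func main() {
	demolish("multinode-357000")
}
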
	I0721 17:13:37.601964    7653 start.go:125] createHost starting for "" (driver="docker")
	I0721 17:13:37.622895    7653 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0721 17:13:37.623075    7653 start.go:159] libmachine.API.Create for "multinode-357000" (driver="docker")
	I0721 17:13:37.623102    7653 client.go:168] LocalClient.Create starting
	I0721 17:13:37.623365    7653 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/ca.pem
	I0721 17:13:37.623475    7653 main.go:141] libmachine: Decoding PEM data...
	I0721 17:13:37.623500    7653 main.go:141] libmachine: Parsing certificate...
	I0721 17:13:37.623581    7653 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/cert.pem
	I0721 17:13:37.623659    7653 main.go:141] libmachine: Decoding PEM data...
	I0721 17:13:37.623673    7653 main.go:141] libmachine: Parsing certificate...
	I0721 17:13:37.644432    7653 cli_runner.go:164] Run: docker network inspect multinode-357000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0721 17:13:37.663816    7653 cli_runner.go:211] docker network inspect multinode-357000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0721 17:13:37.663922    7653 network_create.go:284] running [docker network inspect multinode-357000] to gather additional debugging logs...
	I0721 17:13:37.663942    7653 cli_runner.go:164] Run: docker network inspect multinode-357000
	W0721 17:13:37.680999    7653 cli_runner.go:211] docker network inspect multinode-357000 returned with exit code 1
	I0721 17:13:37.681031    7653 network_create.go:287] error running [docker network inspect multinode-357000]: docker network inspect multinode-357000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-357000 not found
	I0721 17:13:37.681045    7653 network_create.go:289] output of [docker network inspect multinode-357000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-357000 not found
	
	** /stderr **
	I0721 17:13:37.681176    7653 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0721 17:13:37.700590    7653 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:13:37.702086    7653 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:13:37.703630    7653 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:13:37.704975    7653 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:13:37.705389    7653 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0013a2ce0}
	I0721 17:13:37.705403    7653 network_create.go:124] attempt to create docker network multinode-357000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0721 17:13:37.705473    7653 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-357000 multinode-357000
	I0721 17:13:37.768797    7653 network_create.go:108] docker network multinode-357000 192.168.85.0/24 created
	I0721 17:13:37.768838    7653 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-357000" container
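[Editor's sketch] The network.go lines above walk candidate private /24 blocks in steps of 9 (192.168.49.0, .58, .67, .76, .85, ...), skip any block an existing docker network has reserved, and take the first free one; kic.go then derives the gateway (x.x.x.1) and the node's static IP (x.x.x.2). A minimal illustration of that scan, assuming a hypothetical isReserved callback in place of minikube's real reservation check:

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet scans 192.168.49.0/24, 192.168.58.0/24, ... and
// returns the first block the callback does not report as reserved.
func firstFreeSubnet(isReserved func(*net.IPNet) bool) (*net.IPNet, error) {
	for third := 49; third <= 247; third += 9 {
		_, subnet, err := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		if err != nil {
			return nil, err
		}
		if isReserved(subnet) {
			continue // "skipping subnet 192.168.49.0/24 that is reserved"
		}
		return subnet, nil // "using free private subnet 192.168.85.0/24"
	}
	return nil, fmt.Errorf("no free /24 available")
}

func main() {
	reserved := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
	}
	subnet, err := firstFreeSubnet(func(n *net.IPNet) bool { return reserved[n.String()] })
	if err != nil {
		panic(err)
	}
	gateway := make(net.IP, 4)
	copy(gateway, subnet.IP.To4())
	gateway[3] = 1 // gateway x.x.x.1
	node := make(net.IP, 4)
	copy(node, subnet.IP.To4())
	node[3] = 2 // first node gets the static IP x.x.x.2
	fmt.Println(subnet, gateway, node) // 192.168.85.0/24 192.168.85.1 192.168.85.2
}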
	I0721 17:13:37.768964    7653 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0721 17:13:37.786748    7653 cli_runner.go:164] Run: docker volume create multinode-357000 --label name.minikube.sigs.k8s.io=multinode-357000 --label created_by.minikube.sigs.k8s.io=true
	I0721 17:13:37.804243    7653 oci.go:103] Successfully created a docker volume multinode-357000
	I0721 17:13:37.804364    7653 cli_runner.go:164] Run: docker run --rm --name multinode-357000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-357000 --entrypoint /usr/bin/test -v multinode-357000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0721 17:13:38.064747    7653 oci.go:107] Successfully prepared a docker volume multinode-357000
	I0721 17:13:38.064791    7653 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:13:38.064808    7653 kic.go:194] Starting extracting preloaded images to volume ...
	I0721 17:13:38.064977    7653 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-357000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
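[Editor's sketch] Between the extraction command above (17:13:38) and the next log line (17:19:37), six minutes pass with no progress: the 360-second create-host deadline (see the DRV_CREATE_TIMEOUT exit near the end of this log) expires while the preload tarball is still extracting. A generic sketch of that deadline-guard pattern, with illustrative names and a shortened timeout so the example finishes quickly; this is not minikube's actual code:

package main

import (
	"context"
	"fmt"
	"time"
)

// createHost runs one slow step in a goroutine and returns early if the
// context's deadline fires first, mirroring the timeout seen in the log.
func createHost(ctx context.Context, step func() error) error {
	done := make(chan error, 1)
	go func() { done <- step() }()
	select {
	case err := <-done:
		return err
	case <-ctx.Done():
		return fmt.Errorf("create host timed out: %w", ctx.Err())
	}
}

func main() {
	// minikube's real deadline here is 360s; 100ms keeps the demo short.
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()
	slowExtract := func() error { time.Sleep(time.Second); return nil }
	fmt.Println(createHost(ctx, slowExtract)) // create host timed out: context deadline exceeded
}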
	I0721 17:19:37.624041    7653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 17:19:37.624115    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:19:37.643519    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:19:37.643630    7653 retry.go:31] will retry after 244.85208ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:19:37.888905    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:19:37.907755    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:19:37.907876    7653 retry.go:31] will retry after 192.280916ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:19:38.102590    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:19:38.122320    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:19:38.122418    7653 retry.go:31] will retry after 531.614174ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:19:38.656392    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:19:38.675930    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:19:38.676028    7653 retry.go:31] will retry after 837.854877ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:19:39.514443    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:19:39.533954    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	W0721 17:19:39.534062    7653 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	
	W0721 17:19:39.534079    7653 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
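[Editor's sketch] The df probes above (`df -h /var | awk 'NR==2{print $5}'` reads the Use% column for /var) each need the container's mapped SSH port first, and every inspect fails because the container does not exist; retry.go:31 then sleeps a short randomized interval before the next attempt ("will retry after 244.85208ms", "will retry after 192.280916ms", ...). A minimal sketch of such a jittered-backoff loop, with illustrative names rather than minikube's retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff reruns op until it succeeds or maxElapsed passes,
// sleeping a randomized, slowly growing interval between attempts.
func retryWithBackoff(maxElapsed time.Duration, op func() error) error {
	start := time.Now()
	base := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := op()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxElapsed {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		}
		// jitter: sleep between 1x and 2x of the current base interval
		sleep := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		base = base * 3 / 2
	}
}

func main() {
	err := retryWithBackoff(2*time.Second, func() error {
		return errors.New(`get port 22 for "multinode-357000": exit status 1`)
	})
	fmt.Println(err)
}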
	I0721 17:19:39.534146    7653 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0721 17:19:39.534207    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:19:39.551289    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:19:39.551385    7653 retry.go:31] will retry after 261.883538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:19:39.815686    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:19:39.835457    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:19:39.835573    7653 retry.go:31] will retry after 443.798765ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:19:40.281729    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:19:40.300422    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:19:40.300524    7653 retry.go:31] will retry after 683.193727ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:19:40.986095    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:19:41.005890    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	W0721 17:19:41.005994    7653 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	
	W0721 17:19:41.006016    7653 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:19:41.006027    7653 start.go:128] duration metric: took 6m3.403964111s to createHost
	I0721 17:19:41.006102    7653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 17:19:41.006160    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:19:41.023076    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:19:41.023167    7653 retry.go:31] will retry after 168.025257ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:19:41.193637    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:19:41.212637    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:19:41.212748    7653 retry.go:31] will retry after 334.341693ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:19:41.548455    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:19:41.568809    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:19:41.568909    7653 retry.go:31] will retry after 475.61368ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:19:42.045403    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:19:42.064636    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:19:42.064736    7653 retry.go:31] will retry after 445.075501ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:19:42.512239    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:19:42.532333    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	W0721 17:19:42.532435    7653 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	
	W0721 17:19:42.532456    7653 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:19:42.532516    7653 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0721 17:19:42.532582    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:19:42.550402    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:19:42.550499    7653 retry.go:31] will retry after 208.527282ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:19:42.761433    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:19:42.781182    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:19:42.781289    7653 retry.go:31] will retry after 547.093279ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:19:43.330386    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:19:43.348301    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:19:43.348408    7653 retry.go:31] will retry after 769.56317ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:19:44.120073    7653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:19:44.140094    7653 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	W0721 17:19:44.140199    7653 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	
	W0721 17:19:44.140215    7653 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:19:44.140230    7653 fix.go:56] duration metric: took 6m25.778874126s for fixHost
	I0721 17:19:44.140237    7653 start.go:83] releasing machines lock for "multinode-357000", held for 6m25.778943608s
	W0721 17:19:44.140322    7653 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-357000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-357000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0721 17:19:44.183803    7653 out.go:177] 
	W0721 17:19:44.204923    7653 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0721 17:19:44.205006    7653 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0721 17:19:44.205054    7653 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0721 17:19:44.227007    7653 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-357000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
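[Editor's sketch] Throughout the attempts above, minikube asks Docker for the host port bound to the container's port 22 with a single Go template. A standalone illustration of what that template evaluates, run over mock data rather than the docker API:

package main

import (
	"os"
	"text/template"
)

type binding struct{ HostIP, HostPort string }

func main() {
	// Mock of the relevant slice of `docker container inspect` output.
	data := struct {
		NetworkSettings struct{ Ports map[string][]binding }
	}{}
	data.NetworkSettings.Ports = map[string][]binding{
		"22/tcp": {{HostIP: "127.0.0.1", HostPort: "58222"}},
	}
	// The template from the log: walk Ports["22/tcp"][0].HostPort.
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	_ = tmpl.Execute(os.Stdout, data) // prints 58222
}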
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-357000
helpers_test.go:235: (dbg) docker inspect multinode-357000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-357000",
	        "Id": "cee248cbd19547c754e3bbab608db08b233a03a920e747ee5088d9c11c6eeac0",
	        "Created": "2024-07-22T00:13:37.721144313Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-357000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
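[Editor's note] The JSON above has Scope/Driver/IPAM fields and an empty "Containers" map: it is the *network* named multinode-357000, not the container. The container was never successfully created, but the bridge network built earlier survives, so a bare `docker inspect multinode-357000` still matches something. A small sketch that queries both object kinds explicitly via os/exec:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Inspect the name once as a container and once as a network;
	// here only the network variant would succeed.
	for _, kind := range []string{"container", "network"} {
		out, err := exec.Command("docker", kind, "inspect", "multinode-357000").CombinedOutput()
		fmt.Printf("== docker %s inspect: err=%v\n%s\n", kind, err, out)
	}
}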
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000: exit status 7 (73.592719ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0721 17:19:44.394999    8219 status.go:249] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-357000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (755.11s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (84.00s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-357000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-357000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (102.005951ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-357000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-357000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-357000 -- rollout status deployment/busybox: exit status 1 (98.515977ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-357000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.131148ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-357000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.807227ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-357000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.867719ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-357000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.992534ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-357000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.323326ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-357000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0721 17:19:54.381716    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.042538ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-357000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.500317ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-357000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.169023ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-357000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.884166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-357000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.60184ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-357000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
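[Editor's sketch] Every attempt above runs the same query; for reference, the jsonpath {.items[*].status.podIP} flattens each pod's IP out of a pod-list document. A dependency-free sketch of the equivalent extraction over mock list JSON (not a live cluster):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	raw := []byte(`{"items":[
	  {"status":{"podIP":"10.244.0.3"}},
	  {"status":{"podIP":"10.244.1.2"}}]}`)
	var list struct {
		Items []struct {
			Status struct {
				PodIP string `json:"podIP"`
			} `json:"status"`
		} `json:"items"`
	}
	_ = json.Unmarshal(raw, &list)
	for _, it := range list.Items {
		fmt.Print(it.Status.PodIP, " ") // "10.244.0.3 10.244.1.2"
	}
}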
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (98.118105ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-357000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-357000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-357000 -- exec  -- nslookup kubernetes.io: exit status 1 (101.269167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-357000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-357000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-357000 -- exec  -- nslookup kubernetes.default: exit status 1 (99.374479ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-357000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-357000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-357000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (99.310169ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-357000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-357000
helpers_test.go:235: (dbg) docker inspect multinode-357000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-357000",
	        "Id": "cee248cbd19547c754e3bbab608db08b233a03a920e747ee5088d9c11c6eeac0",
	        "Created": "2024-07-22T00:13:37.721144313Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-357000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000: exit status 7 (73.58841ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0721 17:21:08.396922    8332 status.go:249] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-357000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (84.00s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-357000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (98.649377ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-357000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-357000
helpers_test.go:235: (dbg) docker inspect multinode-357000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-357000",
	        "Id": "cee248cbd19547c754e3bbab608db08b233a03a920e747ee5088d9c11c6eeac0",
	        "Created": "2024-07-22T00:13:37.721144313Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-357000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000: exit status 7 (73.156904ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0721 17:21:08.589548    8339 status.go:249] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-357000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.19s)

                                                
                                    
TestMultiNode/serial/AddNode (0.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-357000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-357000 -v 3 --alsologtostderr: exit status 80 (159.69549ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:21:08.644432    8342 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:21:08.645368    8342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:08.645374    8342 out.go:304] Setting ErrFile to fd 2...
	I0721 17:21:08.645378    8342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:08.645572    8342 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 17:21:08.645898    8342 mustload.go:65] Loading cluster: multinode-357000
	I0721 17:21:08.646154    8342 config.go:182] Loaded profile config "multinode-357000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:21:08.646539    8342 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:21:08.663454    8342 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:21:08.685281    8342 out.go:177] 
	W0721 17:21:08.707037    8342 out.go:239] X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-357000 host status: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-357000 host status: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	
	I0721 17:21:08.727776    8342 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-357000 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-357000
helpers_test.go:235: (dbg) docker inspect multinode-357000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-357000",
	        "Id": "cee248cbd19547c754e3bbab608db08b233a03a920e747ee5088d9c11c6eeac0",
	        "Created": "2024-07-22T00:13:37.721144313Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-357000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000: exit status 7 (72.224945ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0721 17:21:08.842850    8346 status.go:249] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-357000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.25s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-357000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-357000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (37.551513ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-357000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-357000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-357000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
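[Editor's note] "unexpected end of JSON input" above is encoding/json's error for decoding empty input, which is all the label query produced once the kubectl context lookup failed. A two-line reproduction:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var v interface{}
	err := json.Unmarshal([]byte(""), &v) // empty kubectl output
	fmt.Println(err)                      // unexpected end of JSON input
}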
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-357000
helpers_test.go:235: (dbg) docker inspect multinode-357000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-357000",
	        "Id": "cee248cbd19547c754e3bbab608db08b233a03a920e747ee5088d9c11c6eeac0",
	        "Created": "2024-07-22T00:13:37.721144313Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-357000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000: exit status 7 (73.196777ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0721 17:21:08.974806    8351 status.go:249] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-357000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.13s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:166: expected profile "multinode-357000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-1-349000\",\"Status\":\"\",\"Config\":null,\"Active\":false,\"ActiveKubeContext\":false}],\"valid\":[{\"Name\":\"multinode-357000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-357000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":
false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-357000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"
KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"A
utoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
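
The assertion above decodes the output of `profile list --output json` and counts `Config.Nodes` for the profile: only the primary control-plane node is recorded, because the worker containers were never created. A minimal Go sketch of the shape of that check (hypothetical struct names mirroring only the fields visible in the JSON above, not the test's real types):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Hypothetical types covering only the fields the check needs; the real
	// test decodes into minikube's own config structs.
	type profileList struct {
		Valid []struct {
			Name   string
			Config struct {
				Nodes []struct {
					Name         string
					ControlPlane bool
					Worker       bool
				}
			}
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64", "profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Println("profile list failed:", err)
			return
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, p := range pl.Valid {
			if p.Name == "multinode-357000" && len(p.Config.Nodes) != 3 {
				fmt.Printf("expected 3 nodes in %q, have %d\n", p.Name, len(p.Config.Nodes))
			}
		}
	}
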
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-357000
helpers_test.go:235: (dbg) docker inspect multinode-357000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-357000",
	        "Id": "cee248cbd19547c754e3bbab608db08b233a03a920e747ee5088d9c11c6eeac0",
	        "Created": "2024-07-22T00:13:37.721144313Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-357000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
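
Note that the post-mortem `docker inspect multinode-357000` above succeeds even though the container is gone: the object it returns is the leftover Docker network of the same name (bridge driver, IPAM config, an empty `Containers` map), while every `docker container inspect` below fails with "No such container". A minimal sketch, assuming only the stock docker CLI, that scopes the inspect so the two namespaces cannot be confused:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Bare "docker inspect NAME" matches any object type (container, network,
	// volume, ...), so a surviving network can mask a deleted container.
	// Scoping the inspect makes the post-mortem unambiguous.
	func main() {
		for _, kind := range []string{"container", "network"} {
			out, err := exec.Command("docker", kind, "inspect", "multinode-357000").CombinedOutput()
			fmt.Printf("docker %s inspect: err=%v\n%s\n", kind, err, out)
		}
	}
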
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000: exit status 7 (73.763258ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0721 17:21:09.184271    8359 status.go:249] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-357000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (0.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-357000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-357000 status --output json --alsologtostderr: exit status 7 (72.685866ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-357000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:21:09.238562    8362 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:21:09.238743    8362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:09.238748    8362 out.go:304] Setting ErrFile to fd 2...
	I0721 17:21:09.238752    8362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:09.238922    8362 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 17:21:09.239097    8362 out.go:298] Setting JSON to true
	I0721 17:21:09.239118    8362 mustload.go:65] Loading cluster: multinode-357000
	I0721 17:21:09.239155    8362 notify.go:220] Checking for updates...
	I0721 17:21:09.239384    8362 config.go:182] Loaded profile config "multinode-357000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:21:09.239402    8362 status.go:255] checking status of multinode-357000 ...
	I0721 17:21:09.239775    8362 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:21:09.256974    8362 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:21:09.257037    8362 status.go:330] multinode-357000 host status = "" (err=state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	)
	I0721 17:21:09.257055    8362 status.go:257] multinode-357000 status: &{Name:multinode-357000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 17:21:09.257075    8362 status.go:260] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	E0721 17:21:09.257081    8362 status.go:263] The "multinode-357000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-357000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
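
The decode error on the line above is a shape mismatch rather than bad JSON: with a single node left, `minikube status --output json` prints one bare object (see the stdout block above), while the test unmarshals into `[]cmd.Status` and therefore only accepts an array, presumably what a healthy multi-node cluster would produce. A minimal sketch of a tolerant decode, using a hypothetical `Status` type rather than minikube's:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status is a stand-in for the fields printed above, not minikube's cmd.Status.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	// decodeStatuses accepts either a bare object (single node) or an array,
	// which is the tolerance this failure mode would have needed.
	func decodeStatuses(data []byte) ([]Status, error) {
		var many []Status
		if err := json.Unmarshal(data, &many); err == nil {
			return many, nil
		}
		var one Status
		if err := json.Unmarshal(data, &one); err != nil {
			return nil, err
		}
		return []Status{one}, nil
	}

	func main() {
		single := []byte(`{"Name":"multinode-357000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}`)
		st, err := decodeStatuses(single)
		fmt.Printf("%+v err=%v\n", st, err)
	}
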
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-357000
helpers_test.go:235: (dbg) docker inspect multinode-357000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-357000",
	        "Id": "cee248cbd19547c754e3bbab608db08b233a03a920e747ee5088d9c11c6eeac0",
	        "Created": "2024-07-22T00:13:37.721144313Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-357000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000: exit status 7 (72.677781ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0721 17:21:09.350643    8366 status.go:249] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-357000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.17s)

                                                
                                    
TestMultiNode/serial/StopNode (0.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-357000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-357000 node stop m03: exit status 85 (147.340417ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-357000 node stop m03": exit status 85
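
Exit status 85 here is paired with the GUEST_NODE_RETRIEVE failure shown in stderr: `node stop m03` cannot find a node named m03 because the earlier start never got past recreating the primary container, so no worker nodes exist in the profile. A minimal sketch of how a harness can recover that exit code in Go:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "-p", "multinode-357000", "node", "stop", "m03")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// 85 is the code logged alongside GUEST_NODE_RETRIEVE above.
			fmt.Println("exit status:", ee.ExitCode())
		}
	}
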
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-357000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-357000 status: exit status 7 (73.382425ms)

                                                
                                                
-- stdout --
	multinode-357000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0721 17:21:09.572087    8371 status.go:260] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	E0721 17:21:09.572097    8371 status.go:263] The "multinode-357000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-357000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-357000 status --alsologtostderr: exit status 7 (72.662163ms)

                                                
                                                
-- stdout --
	multinode-357000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:21:09.626101    8374 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:21:09.626378    8374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:09.626383    8374 out.go:304] Setting ErrFile to fd 2...
	I0721 17:21:09.626387    8374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:09.626588    8374 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 17:21:09.626765    8374 out.go:298] Setting JSON to false
	I0721 17:21:09.626786    8374 mustload.go:65] Loading cluster: multinode-357000
	I0721 17:21:09.626824    8374 notify.go:220] Checking for updates...
	I0721 17:21:09.627049    8374 config.go:182] Loaded profile config "multinode-357000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:21:09.627065    8374 status.go:255] checking status of multinode-357000 ...
	I0721 17:21:09.627435    8374 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:21:09.644759    8374 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:21:09.644829    8374 status.go:330] multinode-357000 host status = "" (err=state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	)
	I0721 17:21:09.644851    8374 status.go:257] multinode-357000 status: &{Name:multinode-357000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 17:21:09.644879    8374 status.go:260] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	E0721 17:21:09.644885    8374 status.go:263] The "multinode-357000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-357000 status --alsologtostderr": multinode-357000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:271: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-357000 status --alsologtostderr": multinode-357000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:275: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-357000 status --alsologtostderr": multinode-357000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-357000
helpers_test.go:235: (dbg) docker inspect multinode-357000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-357000",
	        "Id": "cee248cbd19547c754e3bbab608db08b233a03a920e747ee5088d9c11c6eeac0",
	        "Created": "2024-07-22T00:13:37.721144313Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-357000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000: exit status 7 (73.048122ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0721 17:21:09.738888    8378 status.go:249] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-357000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.39s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (47.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-357000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-357000 node start m03 -v=7 --alsologtostderr: exit status 85 (143.829716ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:21:09.792934    8381 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:21:09.793325    8381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:09.793331    8381 out.go:304] Setting ErrFile to fd 2...
	I0721 17:21:09.793335    8381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:09.793526    8381 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 17:21:09.793856    8381 mustload.go:65] Loading cluster: multinode-357000
	I0721 17:21:09.794127    8381 config.go:182] Loaded profile config "multinode-357000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:21:09.816357    8381 out.go:177] 
	W0721 17:21:09.836989    8381 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0721 17:21:09.837002    8381 out.go:239] * 
	* 
	W0721 17:21:09.839659    8381 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0721 17:21:09.860906    8381 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0721 17:21:09.792934    8381 out.go:291] Setting OutFile to fd 1 ...
I0721 17:21:09.793325    8381 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 17:21:09.793331    8381 out.go:304] Setting ErrFile to fd 2...
I0721 17:21:09.793335    8381 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 17:21:09.793526    8381 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
I0721 17:21:09.793856    8381 mustload.go:65] Loading cluster: multinode-357000
I0721 17:21:09.794127    8381 config.go:182] Loaded profile config "multinode-357000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0721 17:21:09.816357    8381 out.go:177] 
W0721 17:21:09.836989    8381 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0721 17:21:09.837002    8381 out.go:239] * 
* 
W0721 17:21:09.839659    8381 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0721 17:21:09.860906    8381 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-357000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-357000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-357000 status -v=7 --alsologtostderr: exit status 7 (72.946839ms)

                                                
                                                
-- stdout --
	multinode-357000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:21:09.937431    8383 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:21:09.937615    8383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:09.937621    8383 out.go:304] Setting ErrFile to fd 2...
	I0721 17:21:09.937625    8383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:09.937819    8383 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 17:21:09.937992    8383 out.go:298] Setting JSON to false
	I0721 17:21:09.938015    8383 mustload.go:65] Loading cluster: multinode-357000
	I0721 17:21:09.938048    8383 notify.go:220] Checking for updates...
	I0721 17:21:09.938271    8383 config.go:182] Loaded profile config "multinode-357000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:21:09.938287    8383 status.go:255] checking status of multinode-357000 ...
	I0721 17:21:09.938670    8383 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:21:09.956013    8383 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:21:09.956094    8383 status.go:330] multinode-357000 host status = "" (err=state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	)
	I0721 17:21:09.956113    8383 status.go:257] multinode-357000 status: &{Name:multinode-357000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 17:21:09.956135    8383 status.go:260] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	E0721 17:21:09.956140    8383 status.go:263] The "multinode-357000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-357000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-357000 status -v=7 --alsologtostderr: exit status 7 (79.971331ms)

                                                
                                                
-- stdout --
	multinode-357000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:21:11.456696    8388 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:21:11.456914    8388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:11.456940    8388 out.go:304] Setting ErrFile to fd 2...
	I0721 17:21:11.456943    8388 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:11.457126    8388 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 17:21:11.457322    8388 out.go:298] Setting JSON to false
	I0721 17:21:11.457366    8388 mustload.go:65] Loading cluster: multinode-357000
	I0721 17:21:11.457398    8388 notify.go:220] Checking for updates...
	I0721 17:21:11.457624    8388 config.go:182] Loaded profile config "multinode-357000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:21:11.457641    8388 status.go:255] checking status of multinode-357000 ...
	I0721 17:21:11.458062    8388 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:21:11.477218    8388 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:21:11.477295    8388 status.go:330] multinode-357000 host status = "" (err=state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	)
	I0721 17:21:11.477315    8388 status.go:257] multinode-357000 status: &{Name:multinode-357000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 17:21:11.477336    8388 status.go:260] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	E0721 17:21:11.477344    8388 status.go:263] The "multinode-357000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-357000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-357000 status -v=7 --alsologtostderr: exit status 7 (79.971346ms)

                                                
                                                
-- stdout --
	multinode-357000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:21:13.378584    8391 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:21:13.378868    8391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:13.378877    8391 out.go:304] Setting ErrFile to fd 2...
	I0721 17:21:13.378881    8391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:13.379051    8391 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 17:21:13.379237    8391 out.go:298] Setting JSON to false
	I0721 17:21:13.379259    8391 mustload.go:65] Loading cluster: multinode-357000
	I0721 17:21:13.379296    8391 notify.go:220] Checking for updates...
	I0721 17:21:13.379512    8391 config.go:182] Loaded profile config "multinode-357000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:21:13.379530    8391 status.go:255] checking status of multinode-357000 ...
	I0721 17:21:13.379916    8391 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:21:13.398160    8391 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:21:13.398227    8391 status.go:330] multinode-357000 host status = "" (err=state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	)
	I0721 17:21:13.398246    8391 status.go:257] multinode-357000 status: &{Name:multinode-357000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 17:21:13.398270    8391 status.go:260] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	E0721 17:21:13.398277    8391 status.go:263] The "multinode-357000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-357000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-357000 status -v=7 --alsologtostderr: exit status 7 (78.972721ms)

                                                
                                                
-- stdout --
	multinode-357000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:21:16.306024    8398 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:21:16.306303    8398 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:16.306308    8398 out.go:304] Setting ErrFile to fd 2...
	I0721 17:21:16.306312    8398 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:16.306502    8398 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 17:21:16.306732    8398 out.go:298] Setting JSON to false
	I0721 17:21:16.306766    8398 mustload.go:65] Loading cluster: multinode-357000
	I0721 17:21:16.306818    8398 notify.go:220] Checking for updates...
	I0721 17:21:16.307036    8398 config.go:182] Loaded profile config "multinode-357000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:21:16.307056    8398 status.go:255] checking status of multinode-357000 ...
	I0721 17:21:16.307443    8398 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:21:16.325741    8398 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:21:16.325807    8398 status.go:330] multinode-357000 host status = "" (err=state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	)
	I0721 17:21:16.325827    8398 status.go:257] multinode-357000 status: &{Name:multinode-357000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 17:21:16.325849    8398 status.go:260] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	E0721 17:21:16.325858    8398 status.go:263] The "multinode-357000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-357000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-357000 status -v=7 --alsologtostderr: exit status 7 (76.779335ms)

                                                
                                                
-- stdout --
	multinode-357000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:21:20.430007    8407 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:21:20.430288    8407 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:20.430294    8407 out.go:304] Setting ErrFile to fd 2...
	I0721 17:21:20.430298    8407 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:20.430460    8407 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 17:21:20.430637    8407 out.go:298] Setting JSON to false
	I0721 17:21:20.430662    8407 mustload.go:65] Loading cluster: multinode-357000
	I0721 17:21:20.430694    8407 notify.go:220] Checking for updates...
	I0721 17:21:20.430926    8407 config.go:182] Loaded profile config "multinode-357000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:21:20.430943    8407 status.go:255] checking status of multinode-357000 ...
	I0721 17:21:20.431417    8407 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:21:20.449464    8407 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:21:20.449520    8407 status.go:330] multinode-357000 host status = "" (err=state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	)
	I0721 17:21:20.449547    8407 status.go:257] multinode-357000 status: &{Name:multinode-357000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 17:21:20.449578    8407 status.go:260] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	E0721 17:21:20.449584    8407 status.go:263] The "multinode-357000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-357000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-357000 status -v=7 --alsologtostderr: exit status 7 (79.99309ms)

                                                
                                                
-- stdout --
	multinode-357000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:21:24.775393    8412 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:21:24.776177    8412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:24.776186    8412 out.go:304] Setting ErrFile to fd 2...
	I0721 17:21:24.776192    8412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:24.776720    8412 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 17:21:24.776916    8412 out.go:298] Setting JSON to false
	I0721 17:21:24.776939    8412 mustload.go:65] Loading cluster: multinode-357000
	I0721 17:21:24.776979    8412 notify.go:220] Checking for updates...
	I0721 17:21:24.777196    8412 config.go:182] Loaded profile config "multinode-357000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:21:24.777213    8412 status.go:255] checking status of multinode-357000 ...
	I0721 17:21:24.777611    8412 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:21:24.795715    8412 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:21:24.795780    8412 status.go:330] multinode-357000 host status = "" (err=state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	)
	I0721 17:21:24.795804    8412 status.go:257] multinode-357000 status: &{Name:multinode-357000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 17:21:24.795827    8412 status.go:260] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	E0721 17:21:24.795847    8412 status.go:263] The "multinode-357000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-357000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-357000 status -v=7 --alsologtostderr: exit status 7 (79.559875ms)

                                                
                                                
-- stdout --
	multinode-357000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:21:34.700886    8421 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:21:34.701068    8421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:34.701073    8421 out.go:304] Setting ErrFile to fd 2...
	I0721 17:21:34.701077    8421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:34.701232    8421 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 17:21:34.701411    8421 out.go:298] Setting JSON to false
	I0721 17:21:34.701434    8421 mustload.go:65] Loading cluster: multinode-357000
	I0721 17:21:34.701474    8421 notify.go:220] Checking for updates...
	I0721 17:21:34.701723    8421 config.go:182] Loaded profile config "multinode-357000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:21:34.701740    8421 status.go:255] checking status of multinode-357000 ...
	I0721 17:21:34.702126    8421 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:21:34.719761    8421 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:21:34.719828    8421 status.go:330] multinode-357000 host status = "" (err=state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	)
	I0721 17:21:34.719849    8421 status.go:257] multinode-357000 status: &{Name:multinode-357000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 17:21:34.719867    8421 status.go:260] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	E0721 17:21:34.719877    8421 status.go:263] The "multinode-357000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-357000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-357000 status -v=7 --alsologtostderr: exit status 7 (77.681541ms)

                                                
                                                
-- stdout --
	multinode-357000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:21:45.814276    8439 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:21:45.814479    8439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:45.814484    8439 out.go:304] Setting ErrFile to fd 2...
	I0721 17:21:45.814488    8439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:45.814670    8439 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 17:21:45.814865    8439 out.go:298] Setting JSON to false
	I0721 17:21:45.814888    8439 mustload.go:65] Loading cluster: multinode-357000
	I0721 17:21:45.814923    8439 notify.go:220] Checking for updates...
	I0721 17:21:45.815163    8439 config.go:182] Loaded profile config "multinode-357000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:21:45.815181    8439 status.go:255] checking status of multinode-357000 ...
	I0721 17:21:45.815630    8439 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:21:45.833151    8439 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:21:45.833209    8439 status.go:330] multinode-357000 host status = "" (err=state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	)
	I0721 17:21:45.833228    8439 status.go:257] multinode-357000 status: &{Name:multinode-357000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 17:21:45.833254    8439 status.go:260] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	E0721 17:21:45.833262    8439 status.go:263] The "multinode-357000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-357000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-357000 status -v=7 --alsologtostderr: exit status 7 (75.61031ms)

                                                
                                                
-- stdout --
	multinode-357000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:21:57.334670    8450 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:21:57.334874    8450 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:57.334880    8450 out.go:304] Setting ErrFile to fd 2...
	I0721 17:21:57.334883    8450 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:21:57.335049    8450 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 17:21:57.335224    8450 out.go:298] Setting JSON to false
	I0721 17:21:57.335246    8450 mustload.go:65] Loading cluster: multinode-357000
	I0721 17:21:57.335285    8450 notify.go:220] Checking for updates...
	I0721 17:21:57.335527    8450 config.go:182] Loaded profile config "multinode-357000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:21:57.335543    8450 status.go:255] checking status of multinode-357000 ...
	I0721 17:21:57.335931    8450 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:21:57.353657    8450 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:21:57.353733    8450 status.go:330] multinode-357000 host status = "" (err=state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	)
	I0721 17:21:57.353763    8450 status.go:257] multinode-357000 status: &{Name:multinode-357000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 17:21:57.353800    8450 status.go:260] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	E0721 17:21:57.353808    8450 status.go:263] The "multinode-357000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-357000 status -v=7 --alsologtostderr" : exit status 7
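
The nine identical status invocations above (17:21:09 through 17:21:57, with gaps growing from roughly 1.5s to 11s) are the test polling `minikube status` with increasing backoff until it gives up; the node never appears because the container does not exist. A minimal sketch of such a poll loop (hypothetical delays, not the harness's actual retry helper):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Hypothetical schedule; the gaps between the status runs logged above
		// grow in roughly this way before the test gives up.
		delays := []time.Duration{
			1500 * time.Millisecond, 2 * time.Second, 3 * time.Second,
			4 * time.Second, 5 * time.Second, 10 * time.Second, 11 * time.Second,
		}
		for _, d := range delays {
			if exec.Command("out/minikube-darwin-amd64", "-p", "multinode-357000", "status").Run() == nil {
				fmt.Println("status healthy")
				return
			}
			time.Sleep(d)
		}
		fmt.Println("giving up: node never came up")
	}
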
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-357000
helpers_test.go:235: (dbg) docker inspect multinode-357000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-357000",
	        "Id": "cee248cbd19547c754e3bbab608db08b233a03a920e747ee5088d9c11c6eeac0",
	        "Created": "2024-07-22T00:13:37.721144313Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-357000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000: exit status 7 (72.994273ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0721 17:21:57.447834    8454 status.go:249] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-357000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (47.71s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (791.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-357000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-357000
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-357000: exit status 82 (15.082519047s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-357000"  ...
	* Stopping node "multinode-357000"  ...
	* Stopping node "multinode-357000"  ...
	* Stopping node "multinode-357000"  ...
	* Stopping node "multinode-357000"  ...
	* Stopping node "multinode-357000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-357000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-357000" : exit status 82
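
The six repeated "Stopping node" lines in the stdout above suggest a bounded retry loop around the stop operation that eventually gives up with GUEST_STOP_TIMEOUT. A rough sketch of that pattern, with illustrative names and a fixed wait rather than minikube's actual backoff:

package main

import (
	"errors"
	"fmt"
	"time"
)

// stopWithRetry retries stop() up to attempts times, waiting between tries,
// and wraps the last error in a GUEST_STOP_TIMEOUT-style failure. This is a
// sketch of the observed behavior, not minikube's implementation.
func stopWithRetry(stop func() error, attempts int, wait time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		fmt.Println(`* Stopping node "multinode-357000"  ...`)
		if err = stop(); err == nil {
			return nil
		}
		time.Sleep(wait)
	}
	return fmt.Errorf("GUEST_STOP_TIMEOUT after %d attempts: %w", attempts, err)
}

func main() {
	// Simulate the failure mode in this report: the container is already gone,
	// so every stop attempt fails the same way.
	err := stopWithRetry(func() error {
		return errors.New("No such container: multinode-357000")
	}, 6, 100*time.Millisecond)
	fmt.Println(err)
}
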
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-357000 --wait=true -v=8 --alsologtostderr
E0721 17:23:57.669175    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 17:24:14.617548    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 17:24:54.381351    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 17:29:14.616510    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 17:29:37.430839    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 17:29:54.381989    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 17:34:14.617583    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 17:34:54.383074    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-357000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m56.086126109s)

-- stdout --
	* [multinode-357000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-357000" primary control-plane node in "multinode-357000" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* docker "multinode-357000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-357000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
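
The stderr log that follows shows the recreate path in detail: the profile's network is gone, so minikube probes candidate 192.168.x.0/24 blocks (49, 58, 67, 76, ...) and moves to the next block whenever `docker network create` reports "Pool overlaps with other one on this address space". A simplified sketch of that probe loop, relying only on the daemon's overlap error (minikube additionally tracks reserved subnets itself, which this sketch does not; the step of 9 matches the sequence in this log but is otherwise an assumption):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// createNetwork walks candidate 192.168.x.0/24 blocks until the daemon
// accepts one, mirroring the 49 -> 58 -> 67 -> 76 sequence in the log below.
func createNetwork(name string) (string, error) {
	for third := 49; third <= 247; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		out, err := exec.Command("docker", "network", "create", "--driver=bridge",
			"--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
		if err == nil {
			return subnet, nil
		}
		if !strings.Contains(string(out), "Pool overlaps") {
			return "", fmt.Errorf("create %s: %v: %s", subnet, err, out)
		}
		// Overlap with an existing pool: fall through to the next block.
	}
	return "", fmt.Errorf("no free 192.168.x.0/24 subnet found")
}

func main() {
	subnet, err := createNetwork("example-net")
	fmt.Println(subnet, err)
}
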
** stderr ** 
	I0721 17:22:12.640400    8477 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:22:12.640575    8477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:22:12.640581    8477 out.go:304] Setting ErrFile to fd 2...
	I0721 17:22:12.640592    8477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:22:12.640762    8477 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 17:22:12.642219    8477 out.go:298] Setting JSON to false
	I0721 17:22:12.664808    8477 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4902,"bootTime":1721602830,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0721 17:22:12.664903    8477 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:22:12.686862    8477 out.go:177] * [multinode-357000] minikube v1.33.1 on Darwin 14.5
	I0721 17:22:12.729591    8477 notify.go:220] Checking for updates...
	I0721 17:22:12.750579    8477 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:22:12.772183    8477 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	I0721 17:22:12.793617    8477 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0721 17:22:12.815514    8477 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:22:12.836373    8477 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	I0721 17:22:12.878535    8477 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:22:12.899485    8477 config.go:182] Loaded profile config "multinode-357000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:22:12.899581    8477 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:22:12.923150    8477 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0721 17:22:12.923330    8477 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0721 17:22:13.004048    8477 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:98 SystemTime:2024-07-22 00:22:12.994599387 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0721 17:22:13.026123    8477 out.go:177] * Using the docker driver based on existing profile
	I0721 17:22:13.048056    8477 start.go:297] selected driver: docker
	I0721 17:22:13.048087    8477 start.go:901] validating driver "docker" against &{Name:multinode-357000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-357000 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:22:13.048212    8477 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:22:13.048416    8477 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0721 17:22:13.129235    8477 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:98 SystemTime:2024-07-22 00:22:13.120527437 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0721 17:22:13.132319    8477 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:22:13.132355    8477 cni.go:84] Creating CNI manager for ""
	I0721 17:22:13.132364    8477 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0721 17:22:13.132439    8477 start.go:340] cluster config:
	{Name:multinode-357000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-357000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:22:13.153678    8477 out.go:177] * Starting "multinode-357000" primary control-plane node in "multinode-357000" cluster
	I0721 17:22:13.174935    8477 cache.go:121] Beginning downloading kic base image for docker with docker
	I0721 17:22:13.195965    8477 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0721 17:22:13.216578    8477 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:22:13.216637    8477 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0721 17:22:13.216656    8477 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0721 17:22:13.216676    8477 cache.go:56] Caching tarball of preloaded images
	I0721 17:22:13.216898    8477 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0721 17:22:13.216917    8477 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:22:13.217566    8477 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/multinode-357000/config.json ...
	W0721 17:22:13.242868    8477 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f is of wrong architecture
	I0721 17:22:13.242891    8477 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0721 17:22:13.243013    8477 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0721 17:22:13.243032    8477 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0721 17:22:13.243046    8477 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0721 17:22:13.243056    8477 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0721 17:22:13.243061    8477 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0721 17:22:13.246070    8477 image.go:273] response: 
	I0721 17:22:13.880270    8477 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0721 17:22:13.880318    8477 cache.go:194] Successfully downloaded all kic artifacts
	I0721 17:22:13.880363    8477 start.go:360] acquireMachinesLock for multinode-357000: {Name:mkcac3380918714b218acb546d0dc62757d251e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:22:13.880463    8477 start.go:364] duration metric: took 81.702µs to acquireMachinesLock for "multinode-357000"
	I0721 17:22:13.880487    8477 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:22:13.880498    8477 fix.go:54] fixHost starting: 
	I0721 17:22:13.880759    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:22:13.897791    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:22:13.897871    8477 fix.go:112] recreateIfNeeded on multinode-357000: state= err=unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:22:13.897890    8477 fix.go:117] machineExists: false. err=machine does not exist
	I0721 17:22:13.939690    8477 out.go:177] * docker "multinode-357000" container is missing, will recreate.
	I0721 17:22:13.960464    8477 delete.go:124] DEMOLISHING multinode-357000 ...
	I0721 17:22:13.960567    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:22:13.977479    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	W0721 17:22:13.977528    8477 stop.go:83] unable to get state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:22:13.977543    8477 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:22:13.977931    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:22:13.994928    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:22:13.994978    8477 delete.go:82] Unable to get host status for multinode-357000, assuming it has already been deleted: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:22:13.995061    8477 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-357000
	W0721 17:22:14.011825    8477 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-357000 returned with exit code 1
	I0721 17:22:14.011857    8477 kic.go:371] could not find the container multinode-357000 to remove it. will try anyways
	I0721 17:22:14.011932    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:22:14.028926    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	W0721 17:22:14.028974    8477 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:22:14.029051    8477 cli_runner.go:164] Run: docker exec --privileged -t multinode-357000 /bin/bash -c "sudo init 0"
	W0721 17:22:14.046108    8477 cli_runner.go:211] docker exec --privileged -t multinode-357000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0721 17:22:14.046142    8477 oci.go:650] error shutdown multinode-357000: docker exec --privileged -t multinode-357000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:22:15.046469    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:22:15.064051    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:22:15.064107    8477 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:22:15.064126    8477 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:22:15.064160    8477 retry.go:31] will retry after 644.086152ms: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:22:15.708560    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:22:15.725812    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:22:15.725859    8477 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:22:15.725869    8477 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:22:15.725893    8477 retry.go:31] will retry after 1.035622483s: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:22:16.761962    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:22:16.838062    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:22:16.838106    8477 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:22:16.838117    8477 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:22:16.838139    8477 retry.go:31] will retry after 1.596531273s: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:22:18.436743    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:22:18.453579    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:22:18.453630    8477 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:22:18.453641    8477 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:22:18.453667    8477 retry.go:31] will retry after 1.500373098s: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:22:19.955035    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:22:19.972110    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:22:19.972154    8477 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:22:19.972161    8477 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:22:19.972184    8477 retry.go:31] will retry after 3.500967833s: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:22:23.474195    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:22:23.495080    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:22:23.495126    8477 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:22:23.495137    8477 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:22:23.495159    8477 retry.go:31] will retry after 4.943599926s: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:22:28.441207    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:22:28.460382    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:22:28.460432    8477 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:22:28.460444    8477 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:22:28.460465    8477 retry.go:31] will retry after 4.184137726s: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:22:32.645016    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:22:32.667325    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:22:32.667380    8477 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:22:32.667389    8477 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:22:32.667441    8477 oci.go:88] couldn't shut down multinode-357000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	 
	I0721 17:22:32.667522    8477 cli_runner.go:164] Run: docker rm -f -v multinode-357000
	I0721 17:22:32.685540    8477 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-357000
	W0721 17:22:32.703250    8477 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-357000 returned with exit code 1
	I0721 17:22:32.703361    8477 cli_runner.go:164] Run: docker network inspect multinode-357000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0721 17:22:32.721137    8477 cli_runner.go:164] Run: docker network rm multinode-357000
	I0721 17:22:32.802635    8477 fix.go:124] Sleeping 1 second for extra luck!
	I0721 17:22:33.804803    8477 start.go:125] createHost starting for "" (driver="docker")
	I0721 17:22:33.827135    8477 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0721 17:22:33.827341    8477 start.go:159] libmachine.API.Create for "multinode-357000" (driver="docker")
	I0721 17:22:33.827400    8477 client.go:168] LocalClient.Create starting
	I0721 17:22:33.827603    8477 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/ca.pem
	I0721 17:22:33.827704    8477 main.go:141] libmachine: Decoding PEM data...
	I0721 17:22:33.827745    8477 main.go:141] libmachine: Parsing certificate...
	I0721 17:22:33.827845    8477 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/cert.pem
	I0721 17:22:33.827932    8477 main.go:141] libmachine: Decoding PEM data...
	I0721 17:22:33.827948    8477 main.go:141] libmachine: Parsing certificate...
	I0721 17:22:33.828849    8477 cli_runner.go:164] Run: docker network inspect multinode-357000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0721 17:22:33.848427    8477 cli_runner.go:211] docker network inspect multinode-357000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0721 17:22:33.848526    8477 network_create.go:284] running [docker network inspect multinode-357000] to gather additional debugging logs...
	I0721 17:22:33.848542    8477 cli_runner.go:164] Run: docker network inspect multinode-357000
	W0721 17:22:33.866030    8477 cli_runner.go:211] docker network inspect multinode-357000 returned with exit code 1
	I0721 17:22:33.866061    8477 network_create.go:287] error running [docker network inspect multinode-357000]: docker network inspect multinode-357000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-357000 not found
	I0721 17:22:33.866071    8477 network_create.go:289] output of [docker network inspect multinode-357000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-357000 not found
	
	** /stderr **
	I0721 17:22:33.866186    8477 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0721 17:22:33.884806    8477 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:22:33.886411    8477 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:22:33.886753    8477 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016df1d0}
	I0721 17:22:33.886774    8477 network_create.go:124] attempt to create docker network multinode-357000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0721 17:22:33.886839    8477 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-357000 multinode-357000
	W0721 17:22:33.906535    8477 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-357000 multinode-357000 returned with exit code 1
	W0721 17:22:33.906583    8477 network_create.go:149] failed to create docker network multinode-357000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-357000 multinode-357000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0721 17:22:33.906599    8477 network_create.go:116] failed to create docker network multinode-357000 192.168.67.0/24, will retry: subnet is taken
	I0721 17:22:33.907967    8477 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:22:33.908339    8477 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017316d0}
	I0721 17:22:33.908353    8477 network_create.go:124] attempt to create docker network multinode-357000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0721 17:22:33.908422    8477 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-357000 multinode-357000
	I0721 17:22:33.972021    8477 network_create.go:108] docker network multinode-357000 192.168.76.0/24 created
	I0721 17:22:33.972057    8477 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-357000" container
	I0721 17:22:33.972178    8477 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0721 17:22:33.990070    8477 cli_runner.go:164] Run: docker volume create multinode-357000 --label name.minikube.sigs.k8s.io=multinode-357000 --label created_by.minikube.sigs.k8s.io=true
	I0721 17:22:34.007226    8477 oci.go:103] Successfully created a docker volume multinode-357000
	I0721 17:22:34.007358    8477 cli_runner.go:164] Run: docker run --rm --name multinode-357000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-357000 --entrypoint /usr/bin/test -v multinode-357000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0721 17:22:34.270679    8477 oci.go:107] Successfully prepared a docker volume multinode-357000
	I0721 17:22:34.270732    8477 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:22:34.270749    8477 kic.go:194] Starting extracting preloaded images to volume ...
	I0721 17:22:34.270888    8477 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-357000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0721 17:28:33.827780    8477 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 17:28:33.827917    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:28:33.848552    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:28:33.848671    8477 retry.go:31] will retry after 272.579827ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:34.123448    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:28:34.143241    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:28:34.143342    8477 retry.go:31] will retry after 556.148304ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:34.701893    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:28:34.721606    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:28:34.721729    8477 retry.go:31] will retry after 552.005916ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:35.276146    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:28:35.296177    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	W0721 17:28:35.296314    8477 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	
	W0721 17:28:35.296335    8477 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:35.296400    8477 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0721 17:28:35.296455    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:28:35.313895    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:28:35.313994    8477 retry.go:31] will retry after 226.570819ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:35.542330    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:28:35.562897    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:28:35.563008    8477 retry.go:31] will retry after 246.048523ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:35.809380    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:28:35.829526    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:28:35.829634    8477 retry.go:31] will retry after 409.515591ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:36.240291    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:28:36.260254    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:28:36.260350    8477 retry.go:31] will retry after 439.061214ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:36.701810    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:28:36.721122    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	W0721 17:28:36.721224    8477 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	
	W0721 17:28:36.721244    8477 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:36.721263    8477 start.go:128] duration metric: took 6m2.916370017s to createHost
	I0721 17:28:36.721333    8477 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 17:28:36.721402    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:28:36.739098    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:28:36.739190    8477 retry.go:31] will retry after 279.407459ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:37.018958    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:28:37.038735    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:28:37.038828    8477 retry.go:31] will retry after 480.40128ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:37.519908    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:28:37.539742    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:28:37.539838    8477 retry.go:31] will retry after 422.160621ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:37.964179    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:28:37.984352    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	W0721 17:28:37.984460    8477 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	
	W0721 17:28:37.984496    8477 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:37.984560    8477 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0721 17:28:37.984615    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:28:38.002112    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:28:38.002201    8477 retry.go:31] will retry after 302.567063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:38.307162    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:28:38.326944    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:28:38.327037    8477 retry.go:31] will retry after 450.559891ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:38.777790    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:28:38.796532    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:28:38.796625    8477 retry.go:31] will retry after 731.768915ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:39.529782    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:28:39.549326    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	W0721 17:28:39.549430    8477 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	
	W0721 17:28:39.549449    8477 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:39.549460    8477 fix.go:56] duration metric: took 6m25.668939213s for fixHost
	I0721 17:28:39.549466    8477 start.go:83] releasing machines lock for "multinode-357000", held for 6m25.668970481s
	W0721 17:28:39.549481    8477 start.go:714] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W0721 17:28:39.549544    8477 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I0721 17:28:39.549550    8477 start.go:729] Will try again in 5 seconds ...
	I0721 17:28:44.551463    8477 start.go:360] acquireMachinesLock for multinode-357000: {Name:mkcac3380918714b218acb546d0dc62757d251e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:28:44.551662    8477 start.go:364] duration metric: took 159.017µs to acquireMachinesLock for "multinode-357000"
	I0721 17:28:44.551700    8477 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:28:44.551708    8477 fix.go:54] fixHost starting: 
	I0721 17:28:44.552126    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:28:44.571561    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:28:44.571606    8477 fix.go:112] recreateIfNeeded on multinode-357000: state= err=unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:44.571621    8477 fix.go:117] machineExists: false. err=machine does not exist
	I0721 17:28:44.593670    8477 out.go:177] * docker "multinode-357000" container is missing, will recreate.
	I0721 17:28:44.636267    8477 delete.go:124] DEMOLISHING multinode-357000 ...
	I0721 17:28:44.636476    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:28:44.655237    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	W0721 17:28:44.655281    8477 stop.go:83] unable to get state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:44.655307    8477 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:44.655708    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:28:44.672741    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:28:44.672791    8477 delete.go:82] Unable to get host status for multinode-357000, assuming it has already been deleted: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:44.672887    8477 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-357000
	W0721 17:28:44.689932    8477 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-357000 returned with exit code 1
	I0721 17:28:44.689973    8477 kic.go:371] could not find the container multinode-357000 to remove it. will try anyways
	I0721 17:28:44.690052    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:28:44.707261    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	W0721 17:28:44.707305    8477 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:44.707389    8477 cli_runner.go:164] Run: docker exec --privileged -t multinode-357000 /bin/bash -c "sudo init 0"
	W0721 17:28:44.724367    8477 cli_runner.go:211] docker exec --privileged -t multinode-357000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0721 17:28:44.724399    8477 oci.go:650] error shutdown multinode-357000: docker exec --privileged -t multinode-357000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:45.725437    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:28:45.744710    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:28:45.744755    8477 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:45.744767    8477 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:28:45.744802    8477 retry.go:31] will retry after 547.296555ms: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:46.292443    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:28:46.311858    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:28:46.311900    8477 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:46.311911    8477 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:28:46.311937    8477 retry.go:31] will retry after 541.752393ms: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:46.853970    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:28:46.873465    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:28:46.873515    8477 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:46.873524    8477 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:28:46.873550    8477 retry.go:31] will retry after 1.654585756s: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:48.529293    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:28:48.548874    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:28:48.548918    8477 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:48.548928    8477 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:28:48.548953    8477 retry.go:31] will retry after 1.244881883s: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:49.796162    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:28:49.815883    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:28:49.815932    8477 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:49.815942    8477 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:28:49.815967    8477 retry.go:31] will retry after 1.578002743s: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:51.396429    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:28:51.416167    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:28:51.416211    8477 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:51.416221    8477 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:28:51.416244    8477 retry.go:31] will retry after 4.250015357s: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:55.667083    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:28:55.686838    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:28:55.686881    8477 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:28:55.686892    8477 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:28:55.686918    8477 retry.go:31] will retry after 6.04273634s: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:29:01.730844    8477 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:29:01.751011    8477 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:29:01.751053    8477 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:29:01.751067    8477 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:29:01.751097    8477 oci.go:88] couldn't shut down multinode-357000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	 
	I0721 17:29:01.751175    8477 cli_runner.go:164] Run: docker rm -f -v multinode-357000
	I0721 17:29:01.769699    8477 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-357000
	W0721 17:29:01.839608    8477 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-357000 returned with exit code 1
	I0721 17:29:01.839771    8477 cli_runner.go:164] Run: docker network inspect multinode-357000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0721 17:29:01.857515    8477 cli_runner.go:164] Run: docker network rm multinode-357000
	I0721 17:29:01.935957    8477 fix.go:124] Sleeping 1 second for extra luck!
	I0721 17:29:02.936150    8477 start.go:125] createHost starting for "" (driver="docker")
	I0721 17:29:02.958305    8477 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0721 17:29:02.958444    8477 start.go:159] libmachine.API.Create for "multinode-357000" (driver="docker")
	I0721 17:29:02.958471    8477 client.go:168] LocalClient.Create starting
	I0721 17:29:02.958649    8477 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/ca.pem
	I0721 17:29:02.958796    8477 main.go:141] libmachine: Decoding PEM data...
	I0721 17:29:02.958815    8477 main.go:141] libmachine: Parsing certificate...
	I0721 17:29:02.958877    8477 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/cert.pem
	I0721 17:29:02.958937    8477 main.go:141] libmachine: Decoding PEM data...
	I0721 17:29:02.958948    8477 main.go:141] libmachine: Parsing certificate...
	I0721 17:29:02.980893    8477 cli_runner.go:164] Run: docker network inspect multinode-357000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0721 17:29:03.000140    8477 cli_runner.go:211] docker network inspect multinode-357000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0721 17:29:03.000231    8477 network_create.go:284] running [docker network inspect multinode-357000] to gather additional debugging logs...
	I0721 17:29:03.000250    8477 cli_runner.go:164] Run: docker network inspect multinode-357000
	W0721 17:29:03.017624    8477 cli_runner.go:211] docker network inspect multinode-357000 returned with exit code 1
	I0721 17:29:03.017654    8477 network_create.go:287] error running [docker network inspect multinode-357000]: docker network inspect multinode-357000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-357000 not found
	I0721 17:29:03.017665    8477 network_create.go:289] output of [docker network inspect multinode-357000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-357000 not found
	
	** /stderr **
	I0721 17:29:03.017794    8477 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0721 17:29:03.036758    8477 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:29:03.038368    8477 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:29:03.040008    8477 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:29:03.041459    8477 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:29:03.042046    8477 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001467f10}
	I0721 17:29:03.042064    8477 network_create.go:124] attempt to create docker network multinode-357000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0721 17:29:03.042214    8477 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-357000 multinode-357000
	I0721 17:29:03.105718    8477 network_create.go:108] docker network multinode-357000 192.168.85.0/24 created
	I0721 17:29:03.105756    8477 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-357000" container
	I0721 17:29:03.105870    8477 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0721 17:29:03.123977    8477 cli_runner.go:164] Run: docker volume create multinode-357000 --label name.minikube.sigs.k8s.io=multinode-357000 --label created_by.minikube.sigs.k8s.io=true
	I0721 17:29:03.141891    8477 oci.go:103] Successfully created a docker volume multinode-357000
	I0721 17:29:03.142010    8477 cli_runner.go:164] Run: docker run --rm --name multinode-357000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-357000 --entrypoint /usr/bin/test -v multinode-357000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0721 17:29:03.401752    8477 oci.go:107] Successfully prepared a docker volume multinode-357000
	I0721 17:29:03.401801    8477 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:29:03.401818    8477 kic.go:194] Starting extracting preloaded images to volume ...
	I0721 17:29:03.401975    8477 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-357000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir
	I0721 17:35:02.960954    8477 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 17:35:02.961092    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:35:02.980972    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:35:02.981084    8477 retry.go:31] will retry after 151.613477ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:03.135147    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:35:03.153718    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:35:03.153831    8477 retry.go:31] will retry after 322.269817ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:03.476608    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:35:03.496204    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:35:03.496308    8477 retry.go:31] will retry after 292.397019ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:03.789673    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:35:03.809030    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:35:03.809129    8477 retry.go:31] will retry after 528.491537ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:04.338435    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:35:04.358377    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	W0721 17:35:04.358496    8477 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	
	W0721 17:35:04.358514    8477 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:04.358575    8477 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0721 17:35:04.358635    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:35:04.376515    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:35:04.376622    8477 retry.go:31] will retry after 253.492019ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:04.632564    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:35:04.652636    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:35:04.652733    8477 retry.go:31] will retry after 537.288995ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:05.191983    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:35:05.211445    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:35:05.211543    8477 retry.go:31] will retry after 468.401691ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:05.680319    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:35:05.700495    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	W0721 17:35:05.700598    8477 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	
	W0721 17:35:05.700618    8477 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:05.700630    8477 start.go:128] duration metric: took 6m2.764418996s to createHost
	I0721 17:35:05.700698    8477 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 17:35:05.700764    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:35:05.718240    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:35:05.718349    8477 retry.go:31] will retry after 335.665861ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:06.056521    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:35:06.076717    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:35:06.076809    8477 retry.go:31] will retry after 530.954542ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:06.610195    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:35:06.629515    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:35:06.629614    8477 retry.go:31] will retry after 461.198063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:07.093293    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:35:07.113097    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	W0721 17:35:07.113196    8477 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	
	W0721 17:35:07.113211    8477 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:07.113275    8477 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0721 17:35:07.113356    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:35:07.130257    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:35:07.130353    8477 retry.go:31] will retry after 364.651377ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:07.496038    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:35:07.515698    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:35:07.515804    8477 retry.go:31] will retry after 413.894425ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:07.932105    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:35:07.952002    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	I0721 17:35:07.952108    8477 retry.go:31] will retry after 530.613794ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:08.485078    8477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000
	W0721 17:35:08.504712    8477 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000 returned with exit code 1
	W0721 17:35:08.504809    8477 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	
	W0721 17:35:08.504827    8477 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-357000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-357000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:08.504838    8477 fix.go:56] duration metric: took 6m23.95310654s for fixHost
	I0721 17:35:08.504844    8477 start.go:83] releasing machines lock for "multinode-357000", held for 6m23.953144873s
	W0721 17:35:08.504913    8477 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-357000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-357000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0721 17:35:08.547470    8477 out.go:177] 
	W0721 17:35:08.568558    8477 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0721 17:35:08.568627    8477 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0721 17:35:08.568672    8477 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0721 17:35:08.611562    8477 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-357000" : exit status 52
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-357000
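
For reference, the repeated "retry.go:31] will retry after ..." lines in the dump above come from a retry-with-backoff loop wrapped around the failing port-22 lookup. A minimal Go sketch of that shape (illustrative only; the inspectPort22 helper, the timings, and the deadline are assumptions, not minikube's actual retry.go):

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"strings"
		"time"
	)

	// inspectPort22 mirrors the command seen throughout the log: ask Docker
	// which host port is mapped to the container's port 22 (the SSH port).
	func inspectPort22(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).CombinedOutput()
		if err != nil {
			return "", fmt.Errorf("get port 22 for %q: %v: %s", container, err, out)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		deadline := time.Now().Add(5 * time.Second)
		for attempt := 1; ; attempt++ {
			port, err := inspectPort22("multinode-357000")
			if err == nil {
				fmt.Println("ssh host-port:", port)
				return
			}
			if time.Now().After(deadline) {
				fmt.Println("giving up:", err)
				return
			}
			// Randomized backoff, in the spirit of the varying
			// "will retry after Nms" intervals logged above.
			wait := time.Duration(150+rand.Intn(400)) * time.Millisecond
			fmt.Printf("attempt %d: will retry after %v\n", attempt, wait)
			time.Sleep(wait)
		}
	}

Against a container that was never created, every attempt fails with "No such container", which is exactly the pattern filling the stderr dump above.
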
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-357000
helpers_test.go:235: (dbg) docker inspect multinode-357000:

-- stdout --
	[
	    {
	        "Name": "multinode-357000",
	        "Id": "a623b1937d96ac0065fc181de8ccd24cb3001fc72c3a0e66a10b9d17c1393f54",
	        "Created": "2024-07-22T00:29:03.058077891Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-357000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000: exit status 7 (74.842904ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0721 17:35:08.842046    9325 status.go:249] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-357000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (791.39s)
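
The "Nonexistent" host state in the post-mortem above is minikube's translation of the failing inspect call: when "docker container inspect --format={{.State.Status}}" exits 1 with "No such container", status.go reports the host as Nonexistent rather than surfacing the raw Docker error. A rough self-contained sketch of that mapping (the hostState helper is hypothetical, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostState returns the Docker state of a container ("running",
	// "exited", ...) or "Nonexistent" when the daemon reports that the
	// container does not exist, matching the statuses printed in this report.
	func hostState(container string) string {
		out, err := exec.Command("docker", "container", "inspect",
			container, "--format={{.State.Status}}").CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "No such container") {
				return "Nonexistent"
			}
			return "Error"
		}
		return strings.TrimSpace(string(out))
	}

	func main() {
		// Prints "Nonexistent" once the container is gone.
		fmt.Println(hostState("multinode-357000"))
	}
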

TestMultiNode/serial/DeleteNode (0.33s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-357000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-357000 node delete m03: exit status 80 (160.994123ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-357000 host status: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	

** /stderr **
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-amd64 -p multinode-357000 node delete m03": exit status 80
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-357000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-357000 status --alsologtostderr: exit status 7 (73.332048ms)

-- stdout --
	multinode-357000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0721 17:35:09.057875    9331 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:35:09.058070    9331 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:35:09.058076    9331 out.go:304] Setting ErrFile to fd 2...
	I0721 17:35:09.058079    9331 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:35:09.058258    9331 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 17:35:09.058431    9331 out.go:298] Setting JSON to false
	I0721 17:35:09.058453    9331 mustload.go:65] Loading cluster: multinode-357000
	I0721 17:35:09.058492    9331 notify.go:220] Checking for updates...
	I0721 17:35:09.058721    9331 config.go:182] Loaded profile config "multinode-357000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:35:09.058738    9331 status.go:255] checking status of multinode-357000 ...
	I0721 17:35:09.059147    9331 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:35:09.076580    9331 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:35:09.076649    9331 status.go:330] multinode-357000 host status = "" (err=state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	)
	I0721 17:35:09.076672    9331 status.go:257] multinode-357000 status: &{Name:multinode-357000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 17:35:09.076693    9331 status.go:260] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	E0721 17:35:09.076699    9331 status.go:263] The "multinode-357000" host does not exist!

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-357000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-357000
helpers_test.go:235: (dbg) docker inspect multinode-357000:

-- stdout --
	[
	    {
	        "Name": "multinode-357000",
	        "Id": "a623b1937d96ac0065fc181de8ccd24cb3001fc72c3a0e66a10b9d17c1393f54",
	        "Created": "2024-07-22T00:29:03.058077891Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-357000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000: exit status 7 (73.618456ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0721 17:35:09.171555    9335 status.go:249] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-357000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.33s)
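
A note on reading the "--alsologtostderr" dumps in these failures: each "I0721 ...", "W0721 ...", or "E0721 ..." line follows the klog header convention, i.e. severity letter (Info, Warning, Error), month and day (0721 = July 21), wall-clock time with microseconds, the process ID, and the emitting file and line (e.g. "status.go:263]"). A minimal Go program producing the same header shape via k8s.io/klog/v2 (shown for illustration; the flag values here are assumptions):

	package main

	import (
		"flag"

		"k8s.io/klog/v2"
	)

	func main() {
		klog.InitFlags(nil)
		flag.Set("logtostderr", "true") // write to stderr, like the dumps above
		flag.Parse()

		// Emits e.g.: I0721 17:35:20.920035    9351 main.go:17] checking status of multinode-357000 ...
		klog.Infof("checking status of %s ...", "multinode-357000")
		klog.Warningf("container inspect for %s returned exit code 1", "multinode-357000")
		klog.Flush()
	}
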

TestMultiNode/serial/StopMultiNode (11.84s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-357000 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-357000 stop: exit status 82 (11.6026482s)

-- stdout --
	* Stopping node "multinode-357000"  ...
	* Stopping node "multinode-357000"  ...
	* Stopping node "multinode-357000"  ...
	* Stopping node "multinode-357000"  ...
	* Stopping node "multinode-357000"  ...
	* Stopping node "multinode-357000"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-357000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-darwin-amd64 -p multinode-357000 stop": exit status 82
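
The six identical "Stopping node" lines in the stop output above reflect a bounded retry around the stop path, which inspects the container state first and therefore fails immediately on every attempt once the container is gone. A minimal sketch of that bounded-retry shape (the attempt count matches the output above, but stopNode and the loop are illustrative assumptions, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// stopNode checks the container state before stopping it; with no such
	// container, the inspect step fails exactly as in the report above.
	func stopNode(name string) error {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format={{.State.Status}}").CombinedOutput()
		if err != nil {
			return fmt.Errorf("inspect %s: %v: %s", name, err, out)
		}
		return exec.Command("docker", "stop", name).Run()
	}

	func main() {
		var err error
		for i := 0; i < 6; i++ { // six attempts, as in the stop output above
			fmt.Printf("* Stopping node %q  ...\n", "multinode-357000")
			if err = stopNode("multinode-357000"); err == nil {
				return
			}
		}
		fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
	}
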
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-357000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-357000 status: exit status 7 (72.921096ms)

-- stdout --
	multinode-357000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0721 17:35:20.847435    9348 status.go:260] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	E0721 17:35:20.847447    9348 status.go:263] The "multinode-357000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-357000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-357000 status --alsologtostderr: exit status 7 (72.449369ms)

                                                
                                                
-- stdout --
	multinode-357000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:35:20.901580    9351 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:35:20.901863    9351 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:35:20.901868    9351 out.go:304] Setting ErrFile to fd 2...
	I0721 17:35:20.901872    9351 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:35:20.902049    9351 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 17:35:20.902226    9351 out.go:298] Setting JSON to false
	I0721 17:35:20.902248    9351 mustload.go:65] Loading cluster: multinode-357000
	I0721 17:35:20.902283    9351 notify.go:220] Checking for updates...
	I0721 17:35:20.902509    9351 config.go:182] Loaded profile config "multinode-357000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:35:20.902526    9351 status.go:255] checking status of multinode-357000 ...
	I0721 17:35:20.902915    9351 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:35:20.919936    9351 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:35:20.919996    9351 status.go:330] multinode-357000 host status = "" (err=state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	)
	I0721 17:35:20.920016    9351 status.go:257] multinode-357000 status: &{Name:multinode-357000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0721 17:35:20.920035    9351 status.go:260] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	E0721 17:35:20.920043    9351 status.go:263] The "multinode-357000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-357000 status --alsologtostderr": multinode-357000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-357000 status --alsologtostderr": multinode-357000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-357000
helpers_test.go:235: (dbg) docker inspect multinode-357000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-357000",
	        "Id": "a623b1937d96ac0065fc181de8ccd24cb3001fc72c3a0e66a10b9d17c1393f54",
	        "Created": "2024-07-22T00:29:03.058077891Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-357000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000: exit status 7 (72.954834ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0721 17:35:21.014615    9355 status.go:249] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-357000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (11.84s)
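
Note: the "incorrect number of stopped hosts/kubelets" assertions above boil down to counting status lines in the "minikube status" output. A minimal Go sketch of that check (illustrative; countStopped is an assumed name, not the real multinode_test.go code):

	package main

	import (
		"fmt"
		"strings"
	)

	// countStopped mirrors the shape of the failed assertions above: every
	// node in "minikube status" output should report a stopped host and kubelet.
	func countStopped(statusOutput string, wantNodes int) error {
		hosts := strings.Count(statusOutput, "host: Stopped")
		kubelets := strings.Count(statusOutput, "kubelet: Stopped")
		if hosts != wantNodes {
			return fmt.Errorf("incorrect number of stopped hosts: got %d, want %d", hosts, wantNodes)
		}
		if kubelets != wantNodes {
			return fmt.Errorf("incorrect number of stopped kubelets: got %d, want %d", kubelets, wantNodes)
		}
		return nil
	}

	func main() {
		// Status matching the dump above: the host is Nonexistent, not Stopped,
		// so both counts come up short and the test fails.
		out := "multinode-357000\ntype: Control Plane\nhost: Nonexistent\nkubelet: Nonexistent\n"
		fmt.Println(countStopped(out, 1))
	}
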

                                                
                                    
TestMultiNode/serial/RestartMultiNode (108.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-357000 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-357000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (1m48.276727472s)

                                                
                                                
-- stdout --
	* [multinode-357000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-357000" primary control-plane node in "multinode-357000" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* docker "multinode-357000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0721 17:35:21.069267    9358 out.go:291] Setting OutFile to fd 1 ...
	I0721 17:35:21.069542    9358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:35:21.069547    9358 out.go:304] Setting ErrFile to fd 2...
	I0721 17:35:21.069551    9358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 17:35:21.069714    9358 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 17:35:21.071209    9358 out.go:298] Setting JSON to false
	I0721 17:35:21.093639    9358 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5691,"bootTime":1721602830,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0721 17:35:21.093730    9358 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 17:35:21.115385    9358 out.go:177] * [multinode-357000] minikube v1.33.1 on Darwin 14.5
	I0721 17:35:21.157513    9358 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 17:35:21.157586    9358 notify.go:220] Checking for updates...
	I0721 17:35:21.200302    9358 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	I0721 17:35:21.221540    9358 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0721 17:35:21.245167    9358 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 17:35:21.265340    9358 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	I0721 17:35:21.286356    9358 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 17:35:21.308017    9358 config.go:182] Loaded profile config "multinode-357000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 17:35:21.308774    9358 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 17:35:21.332731    9358 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0721 17:35:21.333044    9358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0721 17:35:21.409626    9358 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:false NGoroutines:122 SystemTime:2024-07-22 00:35:21.400645243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0721 17:35:21.431681    9358 out.go:177] * Using the docker driver based on existing profile
	I0721 17:35:21.453150    9358 start.go:297] selected driver: docker
	I0721 17:35:21.453180    9358 start.go:901] validating driver "docker" against &{Name:multinode-357000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-357000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:35:21.453288    9358 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 17:35:21.453504    9358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0721 17:35:21.534460    9358 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:false NGoroutines:122 SystemTime:2024-07-22 00:35:21.52563939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0721 17:35:21.537492    9358 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0721 17:35:21.537553    9358 cni.go:84] Creating CNI manager for ""
	I0721 17:35:21.537565    9358 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0721 17:35:21.537627    9358 start.go:340] cluster config:
	{Name:multinode-357000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-357000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 17:35:21.580227    9358 out.go:177] * Starting "multinode-357000" primary control-plane node in "multinode-357000" cluster
	I0721 17:35:21.600903    9358 cache.go:121] Beginning downloading kic base image for docker with docker
	I0721 17:35:21.622114    9358 out.go:177] * Pulling base image v0.0.44-1721324606-19298 ...
	I0721 17:35:21.664011    9358 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:35:21.664060    9358 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0721 17:35:21.664090    9358 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0721 17:35:21.664109    9358 cache.go:56] Caching tarball of preloaded images
	I0721 17:35:21.664332    9358 preload.go:172] Found /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0721 17:35:21.664352    9358 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0721 17:35:21.665271    9358 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/multinode-357000/config.json ...
	W0721 17:35:21.689792    9358 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f is of wrong architecture
	I0721 17:35:21.689803    9358 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0721 17:35:21.689922    9358 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0721 17:35:21.689940    9358 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory, skipping pull
	I0721 17:35:21.689946    9358 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f exists in cache, skipping pull
	I0721 17:35:21.689956    9358 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f as a tarball
	I0721 17:35:21.689960    9358 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from local cache
	I0721 17:35:21.692978    9358 image.go:273] response: 
	I0721 17:35:22.340716    9358 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f from cached tarball
	I0721 17:35:22.340765    9358 cache.go:194] Successfully downloaded all kic artifacts
	I0721 17:35:22.340812    9358 start.go:360] acquireMachinesLock for multinode-357000: {Name:mkcac3380918714b218acb546d0dc62757d251e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0721 17:35:22.340926    9358 start.go:364] duration metric: took 95.038µs to acquireMachinesLock for "multinode-357000"
	I0721 17:35:22.340951    9358 start.go:96] Skipping create...Using existing machine configuration
	I0721 17:35:22.340961    9358 fix.go:54] fixHost starting: 
	I0721 17:35:22.341183    9358 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:35:22.358923    9358 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:35:22.358976    9358 fix.go:112] recreateIfNeeded on multinode-357000: state= err=unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:22.358993    9358 fix.go:117] machineExists: false. err=machine does not exist
	I0721 17:35:22.400639    9358 out.go:177] * docker "multinode-357000" container is missing, will recreate.
	I0721 17:35:22.421887    9358 delete.go:124] DEMOLISHING multinode-357000 ...
	I0721 17:35:22.421985    9358 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:35:22.439167    9358 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	W0721 17:35:22.439217    9358 stop.go:83] unable to get state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:22.439231    9358 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:22.439601    9358 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:35:22.456663    9358 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:35:22.456717    9358 delete.go:82] Unable to get host status for multinode-357000, assuming it has already been deleted: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:22.456804    9358 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-357000
	W0721 17:35:22.473933    9358 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-357000 returned with exit code 1
	I0721 17:35:22.473965    9358 kic.go:371] could not find the container multinode-357000 to remove it. will try anyways
	I0721 17:35:22.474038    9358 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:35:22.490786    9358 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	W0721 17:35:22.490843    9358 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:22.490930    9358 cli_runner.go:164] Run: docker exec --privileged -t multinode-357000 /bin/bash -c "sudo init 0"
	W0721 17:35:22.508202    9358 cli_runner.go:211] docker exec --privileged -t multinode-357000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0721 17:35:22.508242    9358 oci.go:650] error shutdown multinode-357000: docker exec --privileged -t multinode-357000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:23.508635    9358 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:35:23.525728    9358 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:35:23.525770    9358 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:23.525780    9358 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:35:23.525814    9358 retry.go:31] will retry after 449.587941ms: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:23.976082    9358 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:35:23.993299    9358 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:35:23.993350    9358 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:23.993360    9358 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:35:23.993383    9358 retry.go:31] will retry after 933.129408ms: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:24.926870    9358 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:35:24.944371    9358 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:35:24.944414    9358 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:24.944423    9358 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:35:24.944446    9358 retry.go:31] will retry after 961.782611ms: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:25.907206    9358 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:35:25.924187    9358 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:35:25.924231    9358 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:25.924243    9358 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:35:25.924268    9358 retry.go:31] will retry after 2.286166058s: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:28.210761    9358 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:35:28.228315    9358 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:35:28.228358    9358 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:28.228375    9358 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:35:28.228402    9358 retry.go:31] will retry after 2.839494143s: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:31.068178    9358 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:35:31.095700    9358 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:35:31.095753    9358 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:31.095764    9358 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:35:31.095798    9358 retry.go:31] will retry after 3.693517946s: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:34.790981    9358 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:35:34.810438    9358 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:35:34.810480    9358 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:34.810488    9358 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:35:34.810514    9358 retry.go:31] will retry after 8.394205269s: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:43.207110    9358 cli_runner.go:164] Run: docker container inspect multinode-357000 --format={{.State.Status}}
	W0721 17:35:43.226832    9358 cli_runner.go:211] docker container inspect multinode-357000 --format={{.State.Status}} returned with exit code 1
	I0721 17:35:43.226884    9358 oci.go:662] temporary error verifying shutdown: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	I0721 17:35:43.226893    9358 oci.go:664] temporary error: container multinode-357000 status is  but expect it to be exited
	I0721 17:35:43.226926    9358 oci.go:88] couldn't shut down multinode-357000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000
	 
	I0721 17:35:43.227006    9358 cli_runner.go:164] Run: docker rm -f -v multinode-357000
	I0721 17:35:43.248094    9358 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-357000
	W0721 17:35:43.266455    9358 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-357000 returned with exit code 1
	I0721 17:35:43.266557    9358 cli_runner.go:164] Run: docker network inspect multinode-357000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0721 17:35:43.284018    9358 cli_runner.go:164] Run: docker network rm multinode-357000
	I0721 17:35:43.366378    9358 fix.go:124] Sleeping 1 second for extra luck!
	I0721 17:35:44.368563    9358 start.go:125] createHost starting for "" (driver="docker")
	I0721 17:35:44.394630    9358 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0721 17:35:44.394826    9358 start.go:159] libmachine.API.Create for "multinode-357000" (driver="docker")
	I0721 17:35:44.394888    9358 client.go:168] LocalClient.Create starting
	I0721 17:35:44.395145    9358 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/ca.pem
	I0721 17:35:44.395246    9358 main.go:141] libmachine: Decoding PEM data...
	I0721 17:35:44.395294    9358 main.go:141] libmachine: Parsing certificate...
	I0721 17:35:44.395391    9358 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19312-1112/.minikube/certs/cert.pem
	I0721 17:35:44.395470    9358 main.go:141] libmachine: Decoding PEM data...
	I0721 17:35:44.395484    9358 main.go:141] libmachine: Parsing certificate...
	I0721 17:35:44.396385    9358 cli_runner.go:164] Run: docker network inspect multinode-357000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0721 17:35:44.414964    9358 cli_runner.go:211] docker network inspect multinode-357000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0721 17:35:44.415064    9358 network_create.go:284] running [docker network inspect multinode-357000] to gather additional debugging logs...
	I0721 17:35:44.415082    9358 cli_runner.go:164] Run: docker network inspect multinode-357000
	W0721 17:35:44.432268    9358 cli_runner.go:211] docker network inspect multinode-357000 returned with exit code 1
	I0721 17:35:44.432302    9358 network_create.go:287] error running [docker network inspect multinode-357000]: docker network inspect multinode-357000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-357000 not found
	I0721 17:35:44.432313    9358 network_create.go:289] output of [docker network inspect multinode-357000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-357000 not found
	
	** /stderr **
	I0721 17:35:44.432440    9358 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0721 17:35:44.451384    9358 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:35:44.452987    9358 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:35:44.453319    9358 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014df840}
	I0721 17:35:44.453335    9358 network_create.go:124] attempt to create docker network multinode-357000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0721 17:35:44.453401    9358 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-357000 multinode-357000
	W0721 17:35:44.470835    9358 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-357000 multinode-357000 returned with exit code 1
	W0721 17:35:44.470865    9358 network_create.go:149] failed to create docker network multinode-357000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-357000 multinode-357000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0721 17:35:44.470879    9358 network_create.go:116] failed to create docker network multinode-357000 192.168.67.0/24, will retry: subnet is taken
	I0721 17:35:44.472440    9358 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0721 17:35:44.472808    9358 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001816760}
	I0721 17:35:44.472821    9358 network_create.go:124] attempt to create docker network multinode-357000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0721 17:35:44.472903    9358 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-357000 multinode-357000
	I0721 17:35:44.536280    9358 network_create.go:108] docker network multinode-357000 192.168.76.0/24 created
	I0721 17:35:44.536325    9358 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-357000" container
	I0721 17:35:44.536420    9358 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0721 17:35:44.554515    9358 cli_runner.go:164] Run: docker volume create multinode-357000 --label name.minikube.sigs.k8s.io=multinode-357000 --label created_by.minikube.sigs.k8s.io=true
	I0721 17:35:44.571478    9358 oci.go:103] Successfully created a docker volume multinode-357000
	I0721 17:35:44.571594    9358 cli_runner.go:164] Run: docker run --rm --name multinode-357000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-357000 --entrypoint /usr/bin/test -v multinode-357000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -d /var/lib
	I0721 17:35:44.820115    9358 oci.go:107] Successfully prepared a docker volume multinode-357000
	I0721 17:35:44.820171    9358 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0721 17:35:44.820191    9358 kic.go:194] Starting extracting preloaded images to volume ...
	I0721 17:35:44.820302    9358 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-357000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f -I lz4 -xf /preloaded.tar -C /extractDir

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-357000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
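
Note: the "will retry after 449.587941ms ... 8.394205269s" lines in the stderr above show a jittered, roughly doubling backoff while minikube tries to verify the (already missing) container has exited. A minimal Go sketch of that retry pattern (illustrative; waitForExited is an assumed name, not minikube's actual retry.go code):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForExited polls a state check with jittered, roughly doubling delays,
	// giving up once the attempts are exhausted - the shape of the
	// "will retry after ..." sequence in the log above.
	func waitForExited(check func() (string, error), attempts int) error {
		delay := 500 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if state, err := check(); err == nil && state == "exited" {
				return nil
			}
			// Jitter: pick a delay in [delay/2, 1.5*delay), then double the base.
			d := delay/2 + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v\n", d)
			time.Sleep(d)
			delay *= 2
		}
		return errors.New("couldn't verify container is exited")
	}

	func main() {
		// A check that always fails, mimicking the missing container above.
		err := waitForExited(func() (string, error) {
			return "", errors.New("No such container: multinode-357000")
		}, 4)
		fmt.Println(err)
	}
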
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-357000
helpers_test.go:235: (dbg) docker inspect multinode-357000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-357000",
	        "Id": "94bbfe770e89cc506bd448318b53377aa84f464266213ae5c22d1cc3d347cda1",
	        "Created": "2024-07-22T00:35:44.488234148Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-357000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-357000 -n multinode-357000: exit status 7 (73.730588ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0721 17:37:09.482004    9428 status.go:249] status error: host: state: unknown state "multinode-357000": docker container inspect multinode-357000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-357000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-357000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (108.38s)
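
Note: the network creation in the stderr above walks candidate /24 subnets: 192.168.49.0/24 and 192.168.58.0/24 are skipped as reserved, 192.168.67.0/24 fails with "Pool overlaps with other one on this address space", and 192.168.76.0/24 succeeds. A minimal Go sketch of that hunt (illustrative; pickSubnet and the fixed step of 9 are assumptions based on the logged 49 -> 58 -> 67 -> 76 sequence, not minikube's actual network.go):

	package main

	import "fmt"

	// pickSubnet walks candidate private /24s and returns the first one that
	// is neither reserved nor already allocated to another Docker network.
	func pickSubnet(taken map[string]bool) (string, error) {
		// Third octets step by 9, matching the sequence logged above.
		for octet := 49; octet <= 254; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if taken[subnet] {
				fmt.Println("skipping subnet", subnet, "that is reserved")
				continue
			}
			return subnet, nil
		}
		return "", fmt.Errorf("no free private subnet found")
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true, // the attempt that hit "Pool overlaps"
		}
		fmt.Println(pickSubnet(taken)) // 192.168.76.0/24 <nil>
	}
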

                                                
                                    
TestScheduledStopUnix (300.54s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-362000 --memory=2048 --driver=docker 
E0721 17:39:54.470372    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 17:40:37.758526    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 17:44:14.702557    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-362000 --memory=2048 --driver=docker : signal: killed (5m0.002528024s)

                                                
                                                
-- stdout --
	* [scheduled-stop-362000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-362000" primary control-plane node in "scheduled-stop-362000" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [scheduled-stop-362000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-362000" primary control-plane node in "scheduled-stop-362000" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-21 17:44:23.911607 -0700 PDT m=+4822.723566577
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-362000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-362000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "scheduled-stop-362000",
	        "Id": "a299b78fd70ff8d8945fcfbe19ef83b31c2d8f0a41f73c7c13ccef346053948a",
	        "Created": "2024-07-22T00:39:25.387283895Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-362000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-362000 -n scheduled-stop-362000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-362000 -n scheduled-stop-362000: exit status 7 (78.390998ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0721 17:44:24.012021    9788 status.go:249] status error: host: state: unknown state "scheduled-stop-362000": docker container inspect scheduled-stop-362000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-362000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-362000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-362000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-362000
--- FAIL: TestScheduledStopUnix (300.54s)
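
The post-mortem above shows where the exit status 7 comes from: "minikube status" probes the container state with "docker container inspect <name> --format {{.State.Status}}", and because the 5-minute kill landed before the container was ever created (the docker inspect output lists only the network, with "Containers": {}), the probe fails and the host state is reported as "Nonexistent". Below is a minimal Go sketch of that probe, assuming only that the docker CLI is on PATH; containerState is a hypothetical helper for illustration, not minikube's actual implementation.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// containerState mirrors the check visible in the log above: run
	// `docker container inspect <name> --format {{.State.Status}}` and
	// treat a non-zero exit ("No such container") as a missing host.
	func containerState(name string) string {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "Nonexistent" // the state string seen in the status output above
		}
		return strings.TrimSpace(string(out))
	}
	
	func main() {
		fmt.Println(containerState("scheduled-stop-362000"))
	}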

                                                
                                    
TestSkaffold (300.55s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe4259304916 version
skaffold_test.go:59: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe4259304916 version: (1.720901602s)
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-587000 --memory=2600 --driver=docker 
E0721 17:44:54.468758    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 17:46:17.518162    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 17:49:14.701134    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-587000 --memory=2600 --driver=docker : signal: killed (4m57.320945467s)

                                                
                                                
-- stdout --
	* [skaffold-587000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-587000" primary control-plane node in "skaffold-587000" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [skaffold-587000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-587000" primary control-plane node in "skaffold-587000" cluster
	* Pulling base image v0.0.44-1721324606-19298 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
panic.go:626: *** TestSkaffold FAILED at 2024-07-21 17:49:24.456862 -0700 PDT m=+5123.271290903
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-587000
helpers_test.go:235: (dbg) docker inspect skaffold-587000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "skaffold-587000",
	        "Id": "f38c85274428f90d1987a146a10e209c3bd8e674d24363af6f5d01916f6ce95b",
	        "Created": "2024-07-22T00:44:28.62561885Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-587000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-587000 -n skaffold-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-587000 -n skaffold-587000: exit status 7 (74.677215ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0721 17:49:24.552510    9886 status.go:249] status error: host: state: unknown state "skaffold-587000": docker container inspect skaffold-587000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-587000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-587000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-587000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-587000
--- FAIL: TestSkaffold (300.55s)

                                                
                                    
TestInsufficientStorage (300.45s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-493000 --memory=2048 --output=json --wait=true --driver=docker 
E0721 17:49:54.464270    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 17:54:14.698656    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-493000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.00513234s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e9ad3301-bb35-4545-b45b-244ce9a17d87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-493000] minikube v1.33.1 on Darwin 14.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"468cb926-6e10-4c05-83e4-43582281ff51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19312"}}
	{"specversion":"1.0","id":"f2647f9b-c05c-4f51-83bf-d7bff8bd220a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig"}}
	{"specversion":"1.0","id":"46802260-7ba8-47ef-b18d-41dab4707166","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"cb7fdf55-34ba-4b1d-a520-f5c5a8b646ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"83b42462-8c94-41c1-86c8-e71da47e62fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube"}}
	{"specversion":"1.0","id":"bc1573c0-08f7-4ed6-b4d8-0ab39edf585c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"18ff9fbd-5554-4882-97a3-193c79eae0ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b9df00dc-36bb-493c-bb99-db0e7264c71d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2dc640ff-a200-4a4b-8491-8b6bf783fc16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"040d16fa-ec33-4e11-b9fc-6c81414c3eeb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"e367dfff-82a2-4e2e-99a8-204e878791f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-493000\" primary control-plane node in \"insufficient-storage-493000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b36f1678-7525-471e-8ce9-aa53ecedc99b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1721324606-19298 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"34c4b3ab-81b2-48fc-b065-2d4cfb395e1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-493000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-493000 --output=json --layout=cluster: context deadline exceeded (679ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-493000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-493000
--- FAIL: TestInsufficientStorage (300.45s)
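
The final two steps above explain the unmarshalling failure: the status command was started with only 679ns of context budget remaining, so it was cancelled before writing anything, and status_test.go then tried to decode an empty stdout. Decoding empty input with encoding/json yields exactly the error in the log, as this minimal, self-contained Go sketch shows:

	package main
	
	import (
		"encoding/json"
		"fmt"
	)
	
	func main() {
		// Stand-in for the empty stdout of a command that was cancelled
		// by an exhausted context before it could print any JSON.
		var status map[string]interface{}
		err := json.Unmarshal([]byte(""), &status)
		fmt.Println(err) // unexpected end of JSON input
	}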

                                                
                                    

Test pass (169/210)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 15.63
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.29
9 TestDownloadOnly/v1.20.0/DeleteAll 0.34
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.30.3/json-events 4.63
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.3
18 TestDownloadOnly/v1.30.3/DeleteAll 0.34
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.21
21 TestDownloadOnly/v1.31.0-beta.0/json-events 4.68
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.29
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.35
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.21
29 TestDownloadOnlyKic 1.54
30 TestBinaryMirror 1.37
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.22
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
36 TestAddons/Setup 282.23
40 TestAddons/parallel/InspektorGadget 10.63
41 TestAddons/parallel/MetricsServer 5.73
42 TestAddons/parallel/HelmTiller 11.63
44 TestAddons/parallel/CSI 54.72
45 TestAddons/parallel/Headlamp 12.02
46 TestAddons/parallel/CloudSpanner 5.56
47 TestAddons/parallel/LocalPath 55.17
48 TestAddons/parallel/NvidiaDevicePlugin 5.49
49 TestAddons/parallel/Yakd 5.01
50 TestAddons/parallel/Volcano 40.73
53 TestAddons/serial/GCPAuth/Namespaces 0.12
54 TestAddons/StoppedEnableDisable 11.39
62 TestHyperKitDriverInstallOrUpdate 7.46
65 TestErrorSpam/setup 21.72
66 TestErrorSpam/start 2.18
67 TestErrorSpam/status 0.8
68 TestErrorSpam/pause 1.43
69 TestErrorSpam/unpause 1.49
70 TestErrorSpam/stop 11.15
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 38.09
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 33.89
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 10.69
82 TestFunctional/serial/CacheCmd/cache/add_local 1.41
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
84 TestFunctional/serial/CacheCmd/cache/list 0.08
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
86 TestFunctional/serial/CacheCmd/cache/cache_reload 3.12
87 TestFunctional/serial/CacheCmd/cache/delete 0.16
88 TestFunctional/serial/MinikubeKubectlCmd 1.17
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.56
90 TestFunctional/serial/ExtraConfig 38.35
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 2.91
93 TestFunctional/serial/LogsFileCmd 2.91
94 TestFunctional/serial/InvalidService 4.03
96 TestFunctional/parallel/ConfigCmd 0.54
97 TestFunctional/parallel/DashboardCmd 13.09
98 TestFunctional/parallel/DryRun 1.13
99 TestFunctional/parallel/InternationalLanguage 0.58
100 TestFunctional/parallel/StatusCmd 0.79
105 TestFunctional/parallel/AddonsCmd 0.24
106 TestFunctional/parallel/PersistentVolumeClaim 30.47
108 TestFunctional/parallel/SSHCmd 0.48
109 TestFunctional/parallel/CpCmd 1.52
110 TestFunctional/parallel/MySQL 26.3
111 TestFunctional/parallel/FileSync 0.27
112 TestFunctional/parallel/CertSync 1.62
116 TestFunctional/parallel/NodeLabels 0.05
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.28
120 TestFunctional/parallel/License 0.43
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.51
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 23.22
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
132 TestFunctional/parallel/ServiceCmd/DeployApp 8.22
133 TestFunctional/parallel/ServiceCmd/List 0.87
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.87
135 TestFunctional/parallel/ServiceCmd/HTTPS 15
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
137 TestFunctional/parallel/ProfileCmd/profile_list 0.35
138 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
139 TestFunctional/parallel/MountCmd/any-port 11.75
140 TestFunctional/parallel/ServiceCmd/Format 15
141 TestFunctional/parallel/MountCmd/specific-port 1.58
142 TestFunctional/parallel/MountCmd/VerifyCleanup 2.63
143 TestFunctional/parallel/ServiceCmd/URL 15
144 TestFunctional/parallel/Version/short 0.13
145 TestFunctional/parallel/Version/components 0.59
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
150 TestFunctional/parallel/ImageCommands/ImageBuild 5.61
151 TestFunctional/parallel/ImageCommands/Setup 1.71
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.06
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.63
155 TestFunctional/parallel/DockerEnv/bash 0.96
156 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
157 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
158 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.27
159 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
160 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
161 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.63
162 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.48
163 TestFunctional/delete_echo-server_images 0.04
164 TestFunctional/delete_my-image_image 0.02
165 TestFunctional/delete_minikube_cached_images 0.02
169 TestMultiControlPlane/serial/StartCluster 107.52
170 TestMultiControlPlane/serial/DeployApp 9.88
171 TestMultiControlPlane/serial/PingHostFromPods 1.37
172 TestMultiControlPlane/serial/AddWorkerNode 21.73
173 TestMultiControlPlane/serial/NodeLabels 0.06
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.67
175 TestMultiControlPlane/serial/CopyFile 15.84
176 TestMultiControlPlane/serial/StopSecondaryNode 11.35
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.47
178 TestMultiControlPlane/serial/RestartSecondaryNode 23.7
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.77
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 216.71
181 TestMultiControlPlane/serial/DeleteSecondaryNode 10.5
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.47
183 TestMultiControlPlane/serial/StopCluster 32.56
184 TestMultiControlPlane/serial/RestartCluster 84.16
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.47
186 TestMultiControlPlane/serial/AddSecondaryNode 35.13
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.66
190 TestImageBuild/serial/Setup 21.37
191 TestImageBuild/serial/NormalBuild 3.97
192 TestImageBuild/serial/BuildWithBuildArg 1.44
193 TestImageBuild/serial/BuildWithDockerIgnore 1.13
194 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.21
198 TestJSONOutput/start/Command 74.93
199 TestJSONOutput/start/Audit 0
201 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/pause/Command 0.46
205 TestJSONOutput/pause/Audit 0
207 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/unpause/Command 0.48
211 TestJSONOutput/unpause/Audit 0
213 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
216 TestJSONOutput/stop/Command 5.6
217 TestJSONOutput/stop/Audit 0
219 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
220 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
221 TestErrorJSONOutput 0.57
223 TestKicCustomNetwork/create_custom_network 23.34
224 TestKicCustomNetwork/use_default_bridge_network 23.3
225 TestKicExistingNetwork 22.57
226 TestKicCustomSubnet 22.54
227 TestKicStaticIP 23.59
228 TestMainNoArgs 0.08
229 TestMinikubeProfile 48.71
232 TestMountStart/serial/StartWithMountFirst 7.49
252 TestPreload 133.9
273 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 9.77
274 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 11.13
TestDownloadOnly/v1.20.0/json-events (15.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-638000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-638000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker : (15.632722057s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (15.63s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-638000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-638000: exit status 85 (291.673776ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-638000 | jenkins | v1.33.1 | 21 Jul 24 16:24 PDT |          |
	|         | -p download-only-638000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 16:24:01
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 16:24:01.105751    2045 out.go:291] Setting OutFile to fd 1 ...
	I0721 16:24:01.105948    2045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:24:01.105953    2045 out.go:304] Setting ErrFile to fd 2...
	I0721 16:24:01.105957    2045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:24:01.106127    2045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	W0721 16:24:01.106224    2045 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19312-1112/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19312-1112/.minikube/config/config.json: no such file or directory
	I0721 16:24:01.107890    2045 out.go:298] Setting JSON to true
	I0721 16:24:01.133525    2045 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1411,"bootTime":1721602830,"procs":445,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0721 16:24:01.133608    2045 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 16:24:01.159702    2045 out.go:97] [download-only-638000] minikube v1.33.1 on Darwin 14.5
	I0721 16:24:01.159951    2045 notify.go:220] Checking for updates...
	W0721 16:24:01.160035    2045 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball: no such file or directory
	I0721 16:24:01.181403    2045 out.go:169] MINIKUBE_LOCATION=19312
	I0721 16:24:01.203600    2045 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	I0721 16:24:01.225764    2045 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0721 16:24:01.246597    2045 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 16:24:01.267780    2045 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	W0721 16:24:01.311630    2045 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0721 16:24:01.312163    2045 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 16:24:01.341681    2045 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0721 16:24:01.341827    2045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0721 16:24:01.424172    2045 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:57 SystemTime:2024-07-21 23:24:01.411996436 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0721 16:24:01.445624    2045 out.go:97] Using the docker driver based on user configuration
	I0721 16:24:01.445649    2045 start.go:297] selected driver: docker
	I0721 16:24:01.445660    2045 start.go:901] validating driver "docker" against <nil>
	I0721 16:24:01.445795    2045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0721 16:24:01.531666    2045 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:57 SystemTime:2024-07-21 23:24:01.519229979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0721 16:24:01.531860    2045 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 16:24:01.536178    2045 start_flags.go:393] Using suggested 8100MB memory alloc based on sys=32768MB, container=15991MB
	I0721 16:24:01.536339    2045 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0721 16:24:01.557908    2045 out.go:169] Using Docker Desktop driver with root privileges
	I0721 16:24:01.579632    2045 cni.go:84] Creating CNI manager for ""
	I0721 16:24:01.579699    2045 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0721 16:24:01.579837    2045 start.go:340] cluster config:
	{Name:download-only-638000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:8100 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-638000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 16:24:01.601660    2045 out.go:97] Starting "download-only-638000" primary control-plane node in "download-only-638000" cluster
	I0721 16:24:01.601703    2045 cache.go:121] Beginning downloading kic base image for docker with docker
	I0721 16:24:01.622797    2045 out.go:97] Pulling base image v0.0.44-1721324606-19298 ...
	I0721 16:24:01.622889    2045 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0721 16:24:01.622985    2045 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local docker daemon
	I0721 16:24:01.641616    2045 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0721 16:24:01.641877    2045 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f in local cache directory
	I0721 16:24:01.642021    2045 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f to local cache
	I0721 16:24:01.677855    2045 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0721 16:24:01.677913    2045 cache.go:56] Caching tarball of preloaded images
	I0721 16:24:01.678207    2045 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0721 16:24:01.699505    2045 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0721 16:24:01.699518    2045 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0721 16:24:01.783621    2045 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0721 16:24:08.148669    2045 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0721 16:24:08.148863    2045 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0721 16:24:08.698581    2045 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0721 16:24:08.698808    2045 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/download-only-638000/config.json ...
	I0721 16:24:08.698833    2045 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/download-only-638000/config.json: {Name:mkd8e11cbac1b59621222c6b09039069b1668377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0721 16:24:08.699126    2045 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0721 16:24:08.699419    2045 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19312-1112/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-638000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-638000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.29s)
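
The Last Start log above records the preload flow: download.go fetches the tarball from a URL whose ?checksum=md5:... query carries the expected digest, and preload.go then saves and verifies that checksum before the cache is trusted. The following is a minimal Go sketch of the verification step only, hashing an already-downloaded file against the digest taken from the log's URL; the command-line argument and standalone program are illustrative, not minikube's code.

	package main
	
	import (
		"crypto/md5"
		"fmt"
		"io"
		"os"
	)
	
	func main() {
		// Expected digest from the ?checksum=md5:... parameter in the log's URL.
		const want = "9a82241e9b8b4ad2b5cca73108f2c7a3"
		f, err := os.Open(os.Args[1]) // path to the downloaded preload tarball
		if err != nil {
			panic(err)
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			panic(err)
		}
		if got := fmt.Sprintf("%x", h.Sum(nil)); got != want {
			fmt.Printf("checksum mismatch: got %s, want %s\n", got, want)
			os.Exit(1)
		}
		fmt.Println("checksum verified")
	}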

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.34s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-638000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (4.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-185000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-185000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=docker : (4.632114421s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (4.63s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-185000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-185000: exit status 85 (295.178686ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-638000 | jenkins | v1.33.1 | 21 Jul 24 16:24 PDT |                     |
	|         | -p download-only-638000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 21 Jul 24 16:24 PDT | 21 Jul 24 16:24 PDT |
	| delete  | -p download-only-638000        | download-only-638000 | jenkins | v1.33.1 | 21 Jul 24 16:24 PDT | 21 Jul 24 16:24 PDT |
	| start   | -o=json --download-only        | download-only-185000 | jenkins | v1.33.1 | 21 Jul 24 16:24 PDT |                     |
	|         | -p download-only-185000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 16:24:17
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 16:24:17.585604    2097 out.go:291] Setting OutFile to fd 1 ...
	I0721 16:24:17.585777    2097 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:24:17.585782    2097 out.go:304] Setting ErrFile to fd 2...
	I0721 16:24:17.585786    2097 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:24:17.585954    2097 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 16:24:17.587302    2097 out.go:298] Setting JSON to true
	I0721 16:24:17.611126    2097 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1427,"bootTime":1721602830,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0721 16:24:17.611218    2097 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 16:24:17.633177    2097 out.go:97] [download-only-185000] minikube v1.33.1 on Darwin 14.5
	I0721 16:24:17.633399    2097 notify.go:220] Checking for updates...
	I0721 16:24:17.655118    2097 out.go:169] MINIKUBE_LOCATION=19312
	I0721 16:24:17.676054    2097 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	I0721 16:24:17.697359    2097 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0721 16:24:17.718261    2097 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 16:24:17.739238    2097 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	W0721 16:24:17.782077    2097 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0721 16:24:17.782592    2097 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 16:24:17.810868    2097 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0721 16:24:17.811008    2097 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0721 16:24:17.893894    2097 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:57 SystemTime:2024-07-21 23:24:17.885309632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0721 16:24:17.915206    2097 out.go:97] Using the docker driver based on user configuration
	I0721 16:24:17.915259    2097 start.go:297] selected driver: docker
	I0721 16:24:17.915275    2097 start.go:901] validating driver "docker" against <nil>
	I0721 16:24:17.915472    2097 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0721 16:24:18.001133    2097 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:57 SystemTime:2024-07-21 23:24:17.989789909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0721 16:24:18.001328    2097 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 16:24:18.004074    2097 start_flags.go:393] Using suggested 8100MB memory alloc based on sys=32768MB, container=15991MB
	I0721 16:24:18.004215    2097 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0721 16:24:18.025239    2097 out.go:169] Using Docker Desktop driver with root privileges
	
	
	* The control-plane node download-only-185000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-185000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.30s)
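
Note: exit status 85 above is the expected outcome of running "minikube logs" against a download-only profile that was never started, so the test counts the non-zero exit as a pass. A minimal, hypothetical Go sketch of this kind of exit-code assertion (not the actual test helper; the binary path and profile name are taken from the run above):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Hypothetical: run the same command the test runs and accept exit status 85.
        cmd := exec.Command("out/minikube-darwin-amd64", "logs", "-p", "download-only-185000")
        out, err := cmd.CombinedOutput()
        var ee *exec.ExitError
        switch {
        case errors.As(err, &ee) && ee.ExitCode() == 85:
            fmt.Println("got expected exit status 85: no cluster to collect logs from")
        case err != nil:
            fmt.Printf("unexpected failure: %v\n%s", err, out)
        default:
            fmt.Println("unexpected success")
        }
    }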

TestDownloadOnly/v1.30.3/DeleteAll (0.34s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.34s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-185000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.31.0-beta.0/json-events (4.68s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-245000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-245000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=docker : (4.68130215s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (4.68s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-245000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-245000: exit status 85 (290.008704ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-638000 | jenkins | v1.33.1 | 21 Jul 24 16:24 PDT |                     |
	|         | -p download-only-638000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 21 Jul 24 16:24 PDT | 21 Jul 24 16:24 PDT |
	| delete  | -p download-only-638000             | download-only-638000 | jenkins | v1.33.1 | 21 Jul 24 16:24 PDT | 21 Jul 24 16:24 PDT |
	| start   | -o=json --download-only             | download-only-185000 | jenkins | v1.33.1 | 21 Jul 24 16:24 PDT |                     |
	|         | -p download-only-185000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 21 Jul 24 16:24 PDT | 21 Jul 24 16:24 PDT |
	| delete  | -p download-only-185000             | download-only-185000 | jenkins | v1.33.1 | 21 Jul 24 16:24 PDT | 21 Jul 24 16:24 PDT |
	| start   | -o=json --download-only             | download-only-245000 | jenkins | v1.33.1 | 21 Jul 24 16:24 PDT |                     |
	|         | -p download-only-245000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/21 16:24:23
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0721 16:24:23.060827    2146 out.go:291] Setting OutFile to fd 1 ...
	I0721 16:24:23.061013    2146 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:24:23.061018    2146 out.go:304] Setting ErrFile to fd 2...
	I0721 16:24:23.061022    2146 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:24:23.061195    2146 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 16:24:23.062561    2146 out.go:298] Setting JSON to true
	I0721 16:24:23.085128    2146 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1433,"bootTime":1721602830,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0721 16:24:23.085212    2146 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 16:24:23.105761    2146 out.go:97] [download-only-245000] minikube v1.33.1 on Darwin 14.5
	I0721 16:24:23.105919    2146 notify.go:220] Checking for updates...
	I0721 16:24:23.127082    2146 out.go:169] MINIKUBE_LOCATION=19312
	I0721 16:24:23.147765    2146 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	I0721 16:24:23.168832    2146 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0721 16:24:23.189849    2146 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 16:24:23.211194    2146 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	W0721 16:24:23.253968    2146 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0721 16:24:23.254483    2146 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 16:24:23.279973    2146 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0721 16:24:23.280134    2146 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0721 16:24:23.361015    2146 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:57 SystemTime:2024-07-21 23:24:23.352578186 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0721 16:24:23.381667    2146 out.go:97] Using the docker driver based on user configuration
	I0721 16:24:23.381706    2146 start.go:297] selected driver: docker
	I0721 16:24:23.381720    2146 start.go:901] validating driver "docker" against <nil>
	I0721 16:24:23.381906    2146 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0721 16:24:23.463769    2146 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:57 SystemTime:2024-07-21 23:24:23.455681621 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0721 16:24:23.463934    2146 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0721 16:24:23.466727    2146 start_flags.go:393] Using suggested 8100MB memory alloc based on sys=32768MB, container=15991MB
	I0721 16:24:23.466872    2146 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0721 16:24:23.487776    2146 out.go:169] Using Docker Desktop driver with root privileges
	
	
	* The control-plane node download-only-245000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-245000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.35s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.35s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-245000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnlyKic (1.54s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-181000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-181000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-181000
--- PASS: TestDownloadOnlyKic (1.54s)

TestBinaryMirror (1.37s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-885000 --alsologtostderr --binary-mirror http://127.0.0.1:49352 --driver=docker 
helpers_test.go:175: Cleaning up "binary-mirror-885000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-885000
--- PASS: TestBinaryMirror (1.37s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.22s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-860000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-860000: exit status 85 (220.361573ms)

-- stdout --
	* Profile "addons-860000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-860000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.22s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-860000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-860000: exit status 85 (192.561529ms)

-- stdout --
	* Profile "addons-860000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-860000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

TestAddons/Setup (282.23s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-860000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-860000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (4m42.232428474s)
--- PASS: TestAddons/Setup (282.23s)
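
Note: the Setup run above enables every addon under test in a single start invocation. A small, hypothetical Go sketch showing how such a command line can be composed (the profile name and flag values are taken verbatim from the run above; this is not the test's own code):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        addons := []string{
            "registry", "metrics-server", "volumesnapshots", "csi-hostpath-driver",
            "gcp-auth", "cloud-spanner", "inspektor-gadget", "storage-provisioner-rancher",
            "nvidia-device-plugin", "yakd", "volcano", "ingress", "ingress-dns", "helm-tiller",
        }
        args := []string{"start", "-p", "addons-860000", "--wait=true", "--memory=4000", "--alsologtostderr"}
        for _, a := range addons {
            args = append(args, "--addons="+a)
        }
        args = append(args, "--driver=docker")
        // Prints the equivalent of the start command shown in the log entry above.
        fmt.Println("out/minikube-darwin-amd64 " + strings.Join(args, " "))
    }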

TestAddons/parallel/InspektorGadget (10.63s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-k4d6h" [d8d2b486-dfff-4c7b-b208-e9900966a0b8] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004645021s
addons_test.go:843: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-860000
addons_test.go:843: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-860000: (5.625835155s)
--- PASS: TestAddons/parallel/InspektorGadget (10.63s)

TestAddons/parallel/MetricsServer (5.73s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.65033ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-9slcv" [6f5ebce7-f632-483f-b803-67a90d132cbd] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005284549s
addons_test.go:417: (dbg) Run:  kubectl --context addons-860000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-860000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.73s)

TestAddons/parallel/HelmTiller (11.63s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.122265ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-t8v48" [d25d130e-58b1-4e63-a8db-36b6ab217aae] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.003813489s
addons_test.go:475: (dbg) Run:  kubectl --context addons-860000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-860000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.083970852s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-860000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.63s)

TestAddons/parallel/CSI (54.72s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 5.361814ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-860000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-860000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8aa49d63-9508-4bca-91eb-2fe1a23d923a] Pending
helpers_test.go:344: "task-pv-pod" [8aa49d63-9508-4bca-91eb-2fe1a23d923a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8aa49d63-9508-4bca-91eb-2fe1a23d923a] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003869549s
addons_test.go:586: (dbg) Run:  kubectl --context addons-860000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-860000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-860000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-860000 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-860000 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-860000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-860000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f46e622a-f501-4a1e-886e-08a986cdcece] Pending
helpers_test.go:344: "task-pv-pod-restore" [f46e622a-f501-4a1e-886e-08a986cdcece] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f46e622a-f501-4a1e-886e-08a986cdcece] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005838345s
addons_test.go:628: (dbg) Run:  kubectl --context addons-860000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-860000 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-860000 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-darwin-amd64 -p addons-860000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-darwin-amd64 -p addons-860000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.499227078s)
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-860000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (54.72s)
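
Note: the repeated helpers_test.go:394 lines above are the harness polling the PVC's phase until it reports Bound. A minimal, hypothetical Go sketch of that polling loop, shelling out to the same kubectl query the log shows (the 2-second poll interval is an assumption, not taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", "addons-860000",
                "get", "pvc", "hpvc", "-n", "default",
                "-o", "jsonpath={.status.phase}").Output()
            if err == nil && string(out) == "Bound" {
                fmt.Println("pvc hpvc is Bound")
                return
            }
            time.Sleep(2 * time.Second) // assumed poll interval
        }
        fmt.Println("timed out waiting for pvc hpvc")
    }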

TestAddons/parallel/Headlamp (12.02s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-860000 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-860000 --alsologtostderr -v=1: (1.015418269s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-5wstk" [e08cd9c4-0a38-408d-95f3-3d8dc68be022] Pending
helpers_test.go:344: "headlamp-7867546754-5wstk" [e08cd9c4-0a38-408d-95f3-3d8dc68be022] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-5wstk" [e08cd9c4-0a38-408d-95f3-3d8dc68be022] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004681593s
--- PASS: TestAddons/parallel/Headlamp (12.02s)

TestAddons/parallel/CloudSpanner (5.56s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-49t4p" [4cd12f05-4f5f-4df7-ac7e-9b72e61fd8d6] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003688297s
addons_test.go:862: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-860000
--- PASS: TestAddons/parallel/CloudSpanner (5.56s)

TestAddons/parallel/LocalPath (55.17s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-860000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-860000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3ff39425-89a9-41b7-98a7-41f1382fea7a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3ff39425-89a9-41b7-98a7-41f1382fea7a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3ff39425-89a9-41b7-98a7-41f1382fea7a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003104936s
addons_test.go:992: (dbg) Run:  kubectl --context addons-860000 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-darwin-amd64 -p addons-860000 ssh "cat /opt/local-path-provisioner/pvc-bdf76761-49db-4550-bc6e-426f04c508aa_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-860000 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-860000 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-darwin-amd64 -p addons-860000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-darwin-amd64 -p addons-860000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (37.35560336s)
--- PASS: TestAddons/parallel/LocalPath (55.17s)

TestAddons/parallel/NvidiaDevicePlugin (5.49s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fhf44" [ccd1c3b1-423f-4055-bc69-a0ef911b74dd] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.006789967s
addons_test.go:1056: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-860000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.49s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-wbr5v" [b1d8e05b-5e08-4933-b99a-a9e0aa1e5f13] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005124214s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/parallel/Volcano (40.73s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:905: volcano-controller stabilized in 1.845737ms
addons_test.go:897: volcano-admission stabilized in 2.253506ms
addons_test.go:889: volcano-scheduler stabilized in 2.345804ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-rzph8" [e1e31aaa-05f6-4ae2-bb18-f1efa3c66069] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.003491524s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-9s8qr" [661eb0d8-711c-4c2c-a836-ef379c3e3a79] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.006053491s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-mw4cc" [b6ff13d0-da4d-4f94-a8d4-8c66c8992e31] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.003886307s
addons_test.go:924: (dbg) Run:  kubectl --context addons-860000 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-860000 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-860000 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [b4316e4d-fbeb-4c5d-a0af-c8ec5251bee8] Pending
helpers_test.go:344: "test-job-nginx-0" [b4316e4d-fbeb-4c5d-a0af-c8ec5251bee8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [b4316e4d-fbeb-4c5d-a0af-c8ec5251bee8] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 15.004064271s
addons_test.go:960: (dbg) Run:  out/minikube-darwin-amd64 -p addons-860000 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-darwin-amd64 -p addons-860000 addons disable volcano --alsologtostderr -v=1: (10.442722312s)
--- PASS: TestAddons/parallel/Volcano (40.73s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-860000 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-860000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (11.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-860000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-860000: (10.843392151s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-860000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-860000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-860000
--- PASS: TestAddons/StoppedEnableDisable (11.39s)

TestHyperKitDriverInstallOrUpdate (7.46s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
E0721 17:54:54.463803    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
--- PASS: TestHyperKitDriverInstallOrUpdate (7.46s)

TestErrorSpam/setup (21.72s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-626000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-626000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-626000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-626000 --driver=docker : (21.723281382s)
--- PASS: TestErrorSpam/setup (21.72s)

TestErrorSpam/start (2.18s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-626000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-626000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-626000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-626000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-626000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-626000 start --dry-run
--- PASS: TestErrorSpam/start (2.18s)

TestErrorSpam/status (0.8s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-626000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-626000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-626000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-626000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-626000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-626000 status
--- PASS: TestErrorSpam/status (0.80s)

TestErrorSpam/pause (1.43s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-626000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-626000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-626000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-626000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-626000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-626000 pause
--- PASS: TestErrorSpam/pause (1.43s)

TestErrorSpam/unpause (1.49s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-626000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-626000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-626000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-626000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-626000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-626000 unpause
--- PASS: TestErrorSpam/unpause (1.49s)

TestErrorSpam/stop (11.15s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-626000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-626000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-626000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-626000 stop: (10.660240788s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-626000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-626000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-626000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-626000 stop
--- PASS: TestErrorSpam/stop (11.15s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19312-1112/.minikube/files/etc/test/nested/copy/2043/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (38.09s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-307000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-307000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (38.093224917s)
--- PASS: TestFunctional/serial/StartWithProxy (38.09s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.89s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-307000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-307000 --alsologtostderr -v=8: (33.888092559s)
functional_test.go:659: soft start took 33.888625096s for "functional-307000" cluster.
--- PASS: TestFunctional/serial/SoftStart (33.89s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-307000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (10.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-307000 cache add registry.k8s.io/pause:3.1: (3.945156678s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-307000 cache add registry.k8s.io/pause:3.3: (3.997868004s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-307000 cache add registry.k8s.io/pause:latest: (2.741953007s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (10.69s)
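
Note: cache add pulls each image once on the host, stores it in the local cache under MINIKUBE_HOME, and loads it into the node, which is why the later cache_reload test can restore images without re-pulling. The same sequence by hand:

	out/minikube-darwin-amd64 -p functional-307000 cache add registry.k8s.io/pause:3.1
	out/minikube-darwin-amd64 -p functional-307000 cache add registry.k8s.io/pause:3.3
	out/minikube-darwin-amd64 -p functional-307000 cache add registry.k8s.io/pause:latest
	out/minikube-darwin-amd64 -p functional-307000 ssh sudo crictl images   # verify from inside the node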

TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-307000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2210426262/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 cache add minikube-local-cache-test:functional-307000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 cache delete minikube-local-cache-test:functional-307000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-307000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (3.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (245.143548ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-307000 cache reload: (2.352038538s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.12s)
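
Note: this test deletes an image inside the node, confirms crictl inspecti now fails (the FATA message and exit status 1 above are the expected outcome), then uses cache reload to restore the cached images from the local cache. Condensed:

	out/minikube-darwin-amd64 -p functional-307000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-amd64 -p functional-307000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image is gone
	out/minikube-darwin-amd64 -p functional-307000 cache reload
	out/minikube-darwin-amd64 -p functional-307000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again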

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (1.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 kubectl -- --context functional-307000 get pods
functional_test.go:712: (dbg) Done: out/minikube-darwin-amd64 -p functional-307000 kubectl -- --context functional-307000 get pods: (1.174311005s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.17s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.56s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-307000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-307000 get pods: (1.561052542s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.56s)

TestFunctional/serial/ExtraConfig (38.35s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-307000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0721 16:34:14.528273    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 16:34:14.534207    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 16:34:14.545142    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 16:34:14.565858    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 16:34:14.606263    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 16:34:14.688142    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 16:34:14.849228    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 16:34:15.170162    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 16:34:15.811163    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 16:34:17.091498    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 16:34:19.651792    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 16:34:24.772003    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
E0721 16:34:35.012326    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-307000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.351860309s)
functional_test.go:757: restart took 38.351986942s for "functional-307000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.35s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-307000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (2.91s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-307000 logs: (2.907203571s)
--- PASS: TestFunctional/serial/LogsCmd (2.91s)

TestFunctional/serial/LogsFileCmd (2.91s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd4180082041/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-307000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd4180082041/001/logs.txt: (2.9106456s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.91s)

TestFunctional/serial/InvalidService (4.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-307000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-307000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-307000: exit status 115 (385.220755ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31382 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-307000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.03s)
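
Note: exit status 115 with SVC_UNREACHABLE is the expected result here: the Service object exists (hence the URL table on stdout), but no running pod backs it. To reproduce:

	kubectl --context functional-307000 apply -f testdata/invalidsvc.yaml
	out/minikube-darwin-amd64 service invalid-svc -p functional-307000   # expected: exit status 115
	kubectl --context functional-307000 delete -f testdata/invalidsvc.yaml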

TestFunctional/parallel/ConfigCmd (0.54s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 config get cpus: exit status 14 (63.071294ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 config get cpus: exit status 14 (61.769216ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)
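
Note: the two "Non-zero exit ... exit status 14" results above are the assertions, not failures: config get exits 14 when the key is not set. The round trip by hand:

	out/minikube-darwin-amd64 -p functional-307000 config unset cpus
	out/minikube-darwin-amd64 -p functional-307000 config get cpus    # exit status 14: key not in config
	out/minikube-darwin-amd64 -p functional-307000 config set cpus 2
	out/minikube-darwin-amd64 -p functional-307000 config get cpus    # prints 2
	out/minikube-darwin-amd64 -p functional-307000 config unset cpus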

TestFunctional/parallel/DashboardCmd (13.09s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-307000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-307000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3971: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.09s)

TestFunctional/parallel/DryRun (1.13s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-307000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-307000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (555.267441ms)

-- stdout --
	* [functional-307000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0721 16:36:09.372161    3919 out.go:291] Setting OutFile to fd 1 ...
	I0721 16:36:09.372426    3919 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:36:09.372432    3919 out.go:304] Setting ErrFile to fd 2...
	I0721 16:36:09.372435    3919 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:36:09.372607    3919 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 16:36:09.374020    3919 out.go:298] Setting JSON to false
	I0721 16:36:09.396708    3919 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2139,"bootTime":1721602830,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0721 16:36:09.396835    3919 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 16:36:09.419212    3919 out.go:177] * [functional-307000] minikube v1.33.1 on Darwin 14.5
	I0721 16:36:09.460986    3919 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 16:36:09.461004    3919 notify.go:220] Checking for updates...
	I0721 16:36:09.502993    3919 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	I0721 16:36:09.523827    3919 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0721 16:36:09.545007    3919 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 16:36:09.566061    3919 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	I0721 16:36:09.586886    3919 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 16:36:09.608479    3919 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 16:36:09.609026    3919 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 16:36:09.632358    3919 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0721 16:36:09.632528    3919 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0721 16:36:09.718548    3919 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:65 SystemTime:2024-07-21 23:36:09.709125626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0721 16:36:09.740663    3919 out.go:177] * Using the docker driver based on existing profile
	I0721 16:36:09.761375    3919 start.go:297] selected driver: docker
	I0721 16:36:09.761406    3919 start.go:901] validating driver "docker" against &{Name:functional-307000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-307000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 16:36:09.761549    3919 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 16:36:09.786701    3919 out.go:177] 
	W0721 16:36:09.808562    3919 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0721 16:36:09.830334    3919 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-307000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.13s)
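
Note: --dry-run runs the full validation path without creating or mutating anything, so the 250MB request trips RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23, usable minimum 1800MB) while the second invocation with the profile's existing settings passes. By hand:

	out/minikube-darwin-amd64 start -p functional-307000 --dry-run --memory 250MB --alsologtostderr --driver=docker   # exit status 23
	out/minikube-darwin-amd64 start -p functional-307000 --dry-run --alsologtostderr -v=1 --driver=docker             # validates cleanly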

TestFunctional/parallel/InternationalLanguage (0.58s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-307000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-307000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (582.262436ms)

-- stdout --
	* [functional-307000] minikube v1.33.1 sur Darwin 14.5
	  - MINIKUBE_LOCATION=19312
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0721 16:36:08.785229    3903 out.go:291] Setting OutFile to fd 1 ...
	I0721 16:36:08.785388    3903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:36:08.785393    3903 out.go:304] Setting ErrFile to fd 2...
	I0721 16:36:08.785397    3903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:36:08.785568    3903 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 16:36:08.787216    3903 out.go:298] Setting JSON to false
	I0721 16:36:08.811391    3903 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2138,"bootTime":1721602830,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0721 16:36:08.811491    3903 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0721 16:36:08.834140    3903 out.go:177] * [functional-307000] minikube v1.33.1 sur Darwin 14.5
	I0721 16:36:08.876822    3903 out.go:177]   - MINIKUBE_LOCATION=19312
	I0721 16:36:08.876956    3903 notify.go:220] Checking for updates...
	I0721 16:36:08.919460    3903 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
	I0721 16:36:08.940629    3903 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0721 16:36:08.961726    3903 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0721 16:36:08.982631    3903 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube
	I0721 16:36:09.003864    3903 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0721 16:36:09.024974    3903 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 16:36:09.025355    3903 driver.go:392] Setting default libvirt URI to qemu:///system
	I0721 16:36:09.048849    3903 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0721 16:36:09.049006    3903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0721 16:36:09.132406    3903 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:65 SystemTime:2024-07-21 23:36:09.123782184 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0721 16:36:09.154013    3903 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0721 16:36:09.175221    3903 start.go:297] selected driver: docker
	I0721 16:36:09.175252    3903 start.go:901] validating driver "docker" against &{Name:functional-307000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-307000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0721 16:36:09.175384    3903 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0721 16:36:09.201143    3903 out.go:177] 
	W0721 16:36:09.237991    3903 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0721 16:36:09.259211    3903 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.58s)

TestFunctional/parallel/StatusCmd (0.79s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.79s)

TestFunctional/parallel/AddonsCmd (0.24s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

TestFunctional/parallel/PersistentVolumeClaim (30.47s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f903c3a0-8d17-472e-8c57-862efdef4421] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004033228s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-307000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-307000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-307000 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-307000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-307000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [64ac5a71-66f3-408f-a328-1eca24f3ebaa] Pending
helpers_test.go:344: "sp-pod" [64ac5a71-66f3-408f-a328-1eca24f3ebaa] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [64ac5a71-66f3-408f-a328-1eca24f3ebaa] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.005962385s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-307000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-307000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-307000 delete -f testdata/storage-provisioner/pod.yaml: (1.019924703s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-307000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [10658a48-453b-48fa-b3d1-4fb0af4d0435] Pending
helpers_test.go:344: "sp-pod" [10658a48-453b-48fa-b3d1-4fb0af4d0435] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [10658a48-453b-48fa-b3d1-4fb0af4d0435] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.005421004s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-307000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.47s)
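
Note: the persistence check above boils down to: bind a PVC, write a file through one pod, delete that pod, then read the file back from a fresh pod bound to the same claim. Condensed:

	kubectl --context functional-307000 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-307000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-307000 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-307000 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-307000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-307000 exec sp-pod -- ls /tmp/mount   # foo survived the pod restart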

TestFunctional/parallel/SSHCmd (0.48s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.48s)

TestFunctional/parallel/CpCmd (1.52s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh -n functional-307000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 cp functional-307000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd3398526151/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh -n functional-307000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
E0721 16:34:55.493256    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh -n functional-307000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.52s)

TestFunctional/parallel/MySQL (26.3s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-307000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-w9ntp" [ce2cf2f5-66ad-4352-989d-d8d5aa26cfcd] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-w9ntp" [ce2cf2f5-66ad-4352-989d-d8d5aa26cfcd] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.004096804s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-307000 exec mysql-64454c8b5c-w9ntp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-307000 exec mysql-64454c8b5c-w9ntp -- mysql -ppassword -e "show databases;": exit status 1 (140.04316ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-307000 exec mysql-64454c8b5c-w9ntp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-307000 exec mysql-64454c8b5c-w9ntp -- mysql -ppassword -e "show databases;": exit status 1 (109.478245ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-307000 exec mysql-64454c8b5c-w9ntp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.30s)

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/2043/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "sudo cat /etc/test/nested/copy/2043/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.62s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/2043.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "sudo cat /etc/ssl/certs/2043.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/2043.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "sudo cat /usr/share/ca-certificates/2043.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/20432.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "sudo cat /etc/ssl/certs/20432.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/20432.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "sudo cat /usr/share/ca-certificates/20432.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.62s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-307000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.28s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "sudo systemctl is-active crio": exit status 1 (279.092202ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.28s)

TestFunctional/parallel/License (0.43s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.43s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-307000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-307000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-307000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-307000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3586: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-307000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (23.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-307000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b34c9e9e-d289-467c-a5d4-a69784bfb9dd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b34c9e9e-d289-467c-a5d4-a69784bfb9dd] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 23.006820983s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (23.22s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-307000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-307000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3605: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-307000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-307000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-7zqxj" [03d31eed-9035-4a18-a611-9bf27c535016] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-7zqxj" [03d31eed-9035-4a18-a611-9bf27c535016] Running
E0721 16:35:36.454519    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004130089s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)

TestFunctional/parallel/ServiceCmd/List (0.87s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.87s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.87s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 service list -o json
functional_test.go:1490: Took "867.347495ms" to run "out/minikube-darwin-amd64 -p functional-307000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.87s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 service --namespace=default --https --url hello-node: signal: killed (15.002794135s)

-- stdout --
	https://127.0.0.1:50410

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:50410
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "276.825151ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "77.90292ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "325.96673ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "78.539314ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/MountCmd/any-port (11.75s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-307000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1095573269/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721604951984775000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1095573269/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721604951984775000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1095573269/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721604951984775000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1095573269/001/test-1721604951984775000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (239.147657ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 21 23:35 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 21 23:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 21 23:35 test-1721604951984775000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh cat /mount-9p/test-1721604951984775000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-307000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [56dec93a-7bd2-41d7-9eea-2ff4f084cb84] Pending
helpers_test.go:344: "busybox-mount" [56dec93a-7bd2-41d7-9eea-2ff4f084cb84] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [56dec93a-7bd2-41d7-9eea-2ff4f084cb84] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [56dec93a-7bd2-41d7-9eea-2ff4f084cb84] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.005803096s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-307000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-307000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1095573269/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.75s)

TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 service hello-node --url --format={{.IP}}: signal: killed (15.002220354s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

TestFunctional/parallel/MountCmd/specific-port (1.58s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-307000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port660148315/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (260.390792ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-307000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port660148315/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "sudo umount -f /mount-9p": exit status 1 (223.757092ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-307000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-307000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port660148315/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.58s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.63s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-307000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2645248591/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-307000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2645248591/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-307000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2645248591/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T" /mount1: exit status 1 (313.390196ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T" /mount1: exit status 1 (635.983521ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-307000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-307000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2645248591/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-307000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2645248591/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-307000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2645248591/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.63s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 service hello-node --url
2024/07/21 16:36:23 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 service hello-node --url: signal: killed (15.002376889s)

-- stdout --
	http://127.0.0.1:50524

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:50524
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/parallel/Version/short (0.13s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.13s)

TestFunctional/parallel/Version/components (0.59s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.59s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-307000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-307000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-307000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-307000 image ls --format short --alsologtostderr:
I0721 16:36:32.124125    4188 out.go:291] Setting OutFile to fd 1 ...
I0721 16:36:32.138553    4188 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 16:36:32.138568    4188 out.go:304] Setting ErrFile to fd 2...
I0721 16:36:32.138576    4188 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 16:36:32.138935    4188 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
I0721 16:36:32.141721    4188 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0721 16:36:32.141858    4188 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0721 16:36:32.142387    4188 cli_runner.go:164] Run: docker container inspect functional-307000 --format={{.State.Status}}
I0721 16:36:32.191967    4188 ssh_runner.go:195] Run: systemctl --version
I0721 16:36:32.192095    4188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-307000
I0721 16:36:32.211823    4188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50142 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1112/.minikube/machines/functional-307000/id_rsa Username:docker}
I0721 16:36:32.295215    4188 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-307000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kicbase/echo-server               | functional-307000 | 9056ab77afb8e | 4.94MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 1f6d574d502f3 | 117MB  |
| docker.io/library/nginx                     | latest            | fffffc90d343c | 188MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 76932a3b37d7e | 111MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine            | 099a2d701db1f | 43.2MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 55bb025d2cfa5 | 84.7MB |
| docker.io/library/minikube-local-cache-test | functional-307000 | 8b0043c208641 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.30.3           | 3edc18e7b7672 | 62MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-307000 image ls --format table --alsologtostderr:
I0721 16:36:32.486467    4198 out.go:291] Setting OutFile to fd 1 ...
I0721 16:36:32.486758    4198 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 16:36:32.486765    4198 out.go:304] Setting ErrFile to fd 2...
I0721 16:36:32.486769    4198 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 16:36:32.486943    4198 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
I0721 16:36:32.487610    4198 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0721 16:36:32.487706    4198 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0721 16:36:32.488101    4198 cli_runner.go:164] Run: docker container inspect functional-307000 --format={{.State.Status}}
I0721 16:36:32.507354    4198 ssh_runner.go:195] Run: systemctl --version
I0721 16:36:32.507427    4198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-307000
I0721 16:36:32.526288    4198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50142 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1112/.minikube/machines/functional-307000/id_rsa Username:docker}
I0721 16:36:32.610116    4198 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-307000 image ls --format json --alsologtostderr:
[{"id":"099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"8b0043c208641ffddda195d95b73ee28fe22431ddcb793e0eb627917b341c859","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-307000"],"size":"30"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"111000000"},{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117000000"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"84700000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"62000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-307000"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-307000 image ls --format json --alsologtostderr:
I0721 16:36:32.249329    4191 out.go:291] Setting OutFile to fd 1 ...
I0721 16:36:32.249606    4191 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 16:36:32.249612    4191 out.go:304] Setting ErrFile to fd 2...
I0721 16:36:32.249616    4191 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 16:36:32.249824    4191 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
I0721 16:36:32.250467    4191 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0721 16:36:32.250563    4191 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0721 16:36:32.250960    4191 cli_runner.go:164] Run: docker container inspect functional-307000 --format={{.State.Status}}
I0721 16:36:32.269797    4191 ssh_runner.go:195] Run: systemctl --version
I0721 16:36:32.269877    4191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-307000
I0721 16:36:32.288439    4191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50142 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1112/.minikube/machines/functional-307000/id_rsa Username:docker}
I0721 16:36:32.373390    4191 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-307000 image ls --format yaml --alsologtostderr:
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "111000000"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "84700000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 8b0043c208641ffddda195d95b73ee28fe22431ddcb793e0eb627917b341c859
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-307000
size: "30"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117000000"
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-307000
size: "4940000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "62000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-307000 image ls --format yaml --alsologtostderr:
I0721 16:36:32.405386    4196 out.go:291] Setting OutFile to fd 1 ...
I0721 16:36:32.422411    4196 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 16:36:32.422426    4196 out.go:304] Setting ErrFile to fd 2...
I0721 16:36:32.422435    4196 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 16:36:32.422681    4196 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
I0721 16:36:32.424747    4196 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0721 16:36:32.424905    4196 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0721 16:36:32.425449    4196 cli_runner.go:164] Run: docker container inspect functional-307000 --format={{.State.Status}}
I0721 16:36:32.447750    4196 ssh_runner.go:195] Run: systemctl --version
I0721 16:36:32.447825    4196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-307000
I0721 16:36:32.469978    4196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50142 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1112/.minikube/machines/functional-307000/id_rsa Username:docker}
I0721 16:36:32.554280    4196 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-307000 ssh pgrep buildkitd: exit status 1 (225.272837ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image build -t localhost/my-image:functional-307000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-307000 image build -t localhost/my-image:functional-307000 testdata/build --alsologtostderr: (5.162323539s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-307000 image build -t localhost/my-image:functional-307000 testdata/build --alsologtostderr:
I0721 16:36:32.884196    4210 out.go:291] Setting OutFile to fd 1 ...
I0721 16:36:32.885054    4210 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 16:36:32.885061    4210 out.go:304] Setting ErrFile to fd 2...
I0721 16:36:32.885066    4210 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0721 16:36:32.885259    4210 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
I0721 16:36:32.885898    4210 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0721 16:36:32.887183    4210 config.go:182] Loaded profile config "functional-307000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0721 16:36:32.887606    4210 cli_runner.go:164] Run: docker container inspect functional-307000 --format={{.State.Status}}
I0721 16:36:32.907442    4210 ssh_runner.go:195] Run: systemctl --version
I0721 16:36:32.907518    4210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-307000
I0721 16:36:32.926941    4210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50142 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1112/.minikube/machines/functional-307000/id_rsa Username:docker}
I0721 16:36:33.010232    4210 build_images.go:161] Building image from path: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.341046634.tar
I0721 16:36:33.010305    4210 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0721 16:36:33.019338    4210 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.341046634.tar
I0721 16:36:33.023528    4210 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.341046634.tar: stat -c "%s %y" /var/lib/minikube/build/build.341046634.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.341046634.tar': No such file or directory
I0721 16:36:33.023573    4210 ssh_runner.go:362] scp /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.341046634.tar --> /var/lib/minikube/build/build.341046634.tar (3072 bytes)
I0721 16:36:33.045377    4210 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.341046634
I0721 16:36:33.055033    4210 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.341046634 -xf /var/lib/minikube/build/build.341046634.tar
I0721 16:36:33.063861    4210 docker.go:360] Building image: /var/lib/minikube/build/build.341046634
I0721 16:36:33.063943    4210 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-307000 /var/lib/minikube/build/build.341046634
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 3.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:0cfadd27034a72f6bc874230c318a0bb77ec29ee2846681b2d7504688d21a69b done
#8 naming to localhost/my-image:functional-307000 done
#8 DONE 0.0s
I0721 16:36:37.947023    4210 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-307000 /var/lib/minikube/build/build.341046634: (4.883108848s)
I0721 16:36:37.947082    4210 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.341046634
I0721 16:36:37.956315    4210 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.341046634.tar
I0721 16:36:37.964509    4210 build_images.go:217] Built localhost/my-image:functional-307000 from /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.341046634.tar
I0721 16:36:37.964540    4210 build_images.go:133] succeeded building to: functional-307000
I0721 16:36:37.964547    4210 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.61s)
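For reference, the build this test exercises can be replayed by hand. The Dockerfile content below is inferred from BuildKit steps #5-#7 in the log; treat it as an illustrative sketch of the testdata, not a verified copy:

# testdata/build/Dockerfile, reconstructed from the build steps above (assumption):
#   FROM gcr.io/k8s-minikube/busybox:latest
#   RUN true
#   ADD content.txt /

# Build inside the cluster node, then confirm the image is listed:
out/minikube-darwin-amd64 -p functional-307000 image build \
  -t localhost/my-image:functional-307000 testdata/build --alsologtostderr
out/minikube-darwin-amd64 -p functional-307000 image ls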

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.71s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.687282614s)
functional_test.go:346: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-307000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.71s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image load --daemon kicbase/echo-server:functional-307000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.06s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image load --daemon kicbase/echo-server:functional-307000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-307000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image load --daemon kicbase/echo-server:functional-307000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.63s)
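The three load variants above share one manual recipe: stage an image in the host Docker daemon, then push it into the cluster node. A minimal sketch mirroring the commands logged above:

# Stage an image on the host, retag it for the profile, and load it into the node:
docker pull kicbase/echo-server:latest
docker tag kicbase/echo-server:latest kicbase/echo-server:functional-307000
out/minikube-darwin-amd64 -p functional-307000 image load \
  --daemon kicbase/echo-server:functional-307000
out/minikube-darwin-amd64 -p functional-307000 image ls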

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.96s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-307000 docker-env) && out/minikube-darwin-amd64 status -p functional-307000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-307000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.96s)
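The docker-env check boils down to pointing the host's docker CLI at the daemon inside the node, exactly as the eval lines above do. The same round trip by hand:

# Export DOCKER_HOST and friends for this profile, then talk to the in-node daemon:
eval $(out/minikube-darwin-amd64 -p functional-307000 docker-env)
docker images   # now lists the images inside the minikube node
out/minikube-darwin-amd64 status -p functional-307000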

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image save kicbase/echo-server:functional-307000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)
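All three update-context cases drive the same command, which rewrites the profile's server address in kubeconfig; only the surrounding kubeconfig state differs between the subtests. Roughly:

# Sync the kubeconfig entry for the profile with the cluster's current endpoint:
out/minikube-darwin-amd64 -p functional-307000 update-context --alsologtostderr -v=2
kubectl config current-context   # illustrative follow-up check, not part of the test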

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image rm kicbase/echo-server:functional-307000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi kicbase/echo-server:functional-307000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-307000 image save --daemon kicbase/echo-server:functional-307000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect kicbase/echo-server:functional-307000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)
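Taken together, ImageSaveToFile, ImageLoadFromFile, ImageRemove, and ImageSaveDaemon form a save/load round trip. A condensed sketch (the tar path is the one from the log; the ordering here is illustrative):

# Export an image from the node to a tar, remove it, then restore it both ways:
out/minikube-darwin-amd64 -p functional-307000 image save \
  kicbase/echo-server:functional-307000 /Users/jenkins/workspace/echo-server-save.tar
out/minikube-darwin-amd64 -p functional-307000 image rm kicbase/echo-server:functional-307000
out/minikube-darwin-amd64 -p functional-307000 image load /Users/jenkins/workspace/echo-server-save.tar
# Or copy the image from the node back into the host daemon:
docker rmi kicbase/echo-server:functional-307000
out/minikube-darwin-amd64 -p functional-307000 image save --daemon kicbase/echo-server:functional-307000
docker image inspect kicbase/echo-server:functional-307000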

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-307000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-307000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-307000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (107.52s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-906000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker 
E0721 16:36:58.374352    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-906000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker : (1m46.84280094s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (107.52s)
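The HA bring-up is a single start invocation; --ha provisions multiple control-plane nodes in one profile. As exercised above:

# Start an HA cluster (multiple control planes) and confirm every node is healthy:
out/minikube-darwin-amd64 start -p ha-906000 --wait=true --memory=2200 --ha \
  -v=7 --alsologtostderr --driver=docker
out/minikube-darwin-amd64 -p ha-906000 status -v=7 --alsologtostderr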

                                                
                                    
TestMultiControlPlane/serial/DeployApp (9.88s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-906000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-906000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-906000 -- rollout status deployment/busybox: (7.431522891s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-906000 -- exec busybox-fc5497c4f-2d2bn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-906000 -- exec busybox-fc5497c4f-d427n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-906000 -- exec busybox-fc5497c4f-m5j6w -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-906000 -- exec busybox-fc5497c4f-2d2bn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-906000 -- exec busybox-fc5497c4f-d427n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-906000 -- exec busybox-fc5497c4f-m5j6w -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-906000 -- exec busybox-fc5497c4f-2d2bn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-906000 -- exec busybox-fc5497c4f-d427n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-906000 -- exec busybox-fc5497c4f-m5j6w -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.88s)
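The deploy check is plain kubectl driven through the minikube wrapper: apply a busybox Deployment, wait for rollout, then resolve DNS from each pod. A sketch (pod names vary run to run, so one pod is picked here for illustration):

# Deploy the test workload and wait until all replicas are ready:
out/minikube-darwin-amd64 kubectl -p ha-906000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
out/minikube-darwin-amd64 kubectl -p ha-906000 -- rollout status deployment/busybox
# Resolve an in-cluster name from one of the pods:
POD=$(out/minikube-darwin-amd64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[0].metadata.name}')
out/minikube-darwin-amd64 kubectl -p ha-906000 -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local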

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.37s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-906000 -- exec busybox-fc5497c4f-2d2bn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-906000 -- exec busybox-fc5497c4f-2d2bn -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-906000 -- exec busybox-fc5497c4f-d427n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-906000 -- exec busybox-fc5497c4f-d427n -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-906000 -- exec busybox-fc5497c4f-m5j6w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-906000 -- exec busybox-fc5497c4f-m5j6w -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.37s)
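The host-reachability probe extracts the host IP from the synthetic host.minikube.internal record and pings it once; the awk/cut pipeline below is the one the test runs inside each pod. Replayed against a single pod:

# Resolve the host gateway name inside a pod, take the address, ping it once:
POD=$(out/minikube-darwin-amd64 kubectl -p ha-906000 -- get pods -o jsonpath='{.items[0].metadata.name}')
out/minikube-darwin-amd64 kubectl -p ha-906000 -- exec "$POD" -- sh -c \
  "ping -c 1 \$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)"
# On this Docker Desktop run the resolved address was 192.168.65.254.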

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (21.73s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-906000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-906000 -v=7 --alsologtostderr: (20.885068372s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (21.73s)
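Adding a worker is one node add call; without --control-plane the new node joins as a worker. As run above:

# Attach a fourth node (worker role) to the HA cluster and re-check status:
out/minikube-darwin-amd64 node add -p ha-906000 -v=7 --alsologtostderr
out/minikube-darwin-amd64 -p ha-906000 status -v=7 --alsologtostderr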

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-906000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.67s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.67s)

TestMultiControlPlane/serial/CopyFile (15.84s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 cp testdata/cp-test.txt ha-906000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 cp ha-906000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile1447195463/001/cp-test_ha-906000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 cp ha-906000:/home/docker/cp-test.txt ha-906000-m02:/home/docker/cp-test_ha-906000_ha-906000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m02 "sudo cat /home/docker/cp-test_ha-906000_ha-906000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 cp ha-906000:/home/docker/cp-test.txt ha-906000-m03:/home/docker/cp-test_ha-906000_ha-906000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m03 "sudo cat /home/docker/cp-test_ha-906000_ha-906000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 cp ha-906000:/home/docker/cp-test.txt ha-906000-m04:/home/docker/cp-test_ha-906000_ha-906000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m04 "sudo cat /home/docker/cp-test_ha-906000_ha-906000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 cp testdata/cp-test.txt ha-906000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 cp ha-906000-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile1447195463/001/cp-test_ha-906000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 cp ha-906000-m02:/home/docker/cp-test.txt ha-906000:/home/docker/cp-test_ha-906000-m02_ha-906000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000 "sudo cat /home/docker/cp-test_ha-906000-m02_ha-906000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 cp ha-906000-m02:/home/docker/cp-test.txt ha-906000-m03:/home/docker/cp-test_ha-906000-m02_ha-906000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m03 "sudo cat /home/docker/cp-test_ha-906000-m02_ha-906000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 cp ha-906000-m02:/home/docker/cp-test.txt ha-906000-m04:/home/docker/cp-test_ha-906000-m02_ha-906000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m04 "sudo cat /home/docker/cp-test_ha-906000-m02_ha-906000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 cp testdata/cp-test.txt ha-906000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 cp ha-906000-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile1447195463/001/cp-test_ha-906000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 cp ha-906000-m03:/home/docker/cp-test.txt ha-906000:/home/docker/cp-test_ha-906000-m03_ha-906000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000 "sudo cat /home/docker/cp-test_ha-906000-m03_ha-906000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 cp ha-906000-m03:/home/docker/cp-test.txt ha-906000-m02:/home/docker/cp-test_ha-906000-m03_ha-906000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m02 "sudo cat /home/docker/cp-test_ha-906000-m03_ha-906000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 cp ha-906000-m03:/home/docker/cp-test.txt ha-906000-m04:/home/docker/cp-test_ha-906000-m03_ha-906000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m04 "sudo cat /home/docker/cp-test_ha-906000-m03_ha-906000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 cp testdata/cp-test.txt ha-906000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m04 "sudo cat /home/docker/cp-test.txt"
E0721 16:39:14.525753    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 cp ha-906000-m04:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile1447195463/001/cp-test_ha-906000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 cp ha-906000-m04:/home/docker/cp-test.txt ha-906000:/home/docker/cp-test_ha-906000-m04_ha-906000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000 "sudo cat /home/docker/cp-test_ha-906000-m04_ha-906000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 cp ha-906000-m04:/home/docker/cp-test.txt ha-906000-m02:/home/docker/cp-test_ha-906000-m04_ha-906000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m02 "sudo cat /home/docker/cp-test_ha-906000-m04_ha-906000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 cp ha-906000-m04:/home/docker/cp-test.txt ha-906000-m03:/home/docker/cp-test_ha-906000-m04_ha-906000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m03 "sudo cat /home/docker/cp-test_ha-906000-m04_ha-906000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.84s)
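The copy matrix above is generated from one primitive, minikube cp, verified with ssh -n ... cat. The three directions it covers, in sketch form (the /tmp destination is illustrative):

# host -> node, node -> host, and node -> node copies, verified over SSH:
out/minikube-darwin-amd64 -p ha-906000 cp testdata/cp-test.txt ha-906000:/home/docker/cp-test.txt
out/minikube-darwin-amd64 -p ha-906000 cp ha-906000:/home/docker/cp-test.txt /tmp/cp-test_ha-906000.txt
out/minikube-darwin-amd64 -p ha-906000 cp ha-906000:/home/docker/cp-test.txt \
  ha-906000-m02:/home/docker/cp-test_ha-906000_ha-906000-m02.txt
out/minikube-darwin-amd64 -p ha-906000 ssh -n ha-906000-m02 "sudo cat /home/docker/cp-test_ha-906000_ha-906000-m02.txt"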

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (11.35s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-906000 node stop m02 -v=7 --alsologtostderr: (10.722904409s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-906000 status -v=7 --alsologtostderr: exit status 7 (625.303295ms)

-- stdout --
	ha-906000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-906000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-906000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-906000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0721 16:39:28.797827    4999 out.go:291] Setting OutFile to fd 1 ...
	I0721 16:39:28.798121    4999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:39:28.798126    4999 out.go:304] Setting ErrFile to fd 2...
	I0721 16:39:28.798130    4999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:39:28.798307    4999 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 16:39:28.798485    4999 out.go:298] Setting JSON to false
	I0721 16:39:28.798507    4999 mustload.go:65] Loading cluster: ha-906000
	I0721 16:39:28.798542    4999 notify.go:220] Checking for updates...
	I0721 16:39:28.798801    4999 config.go:182] Loaded profile config "ha-906000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 16:39:28.798818    4999 status.go:255] checking status of ha-906000 ...
	I0721 16:39:28.799218    4999 cli_runner.go:164] Run: docker container inspect ha-906000 --format={{.State.Status}}
	I0721 16:39:28.817126    4999 status.go:330] ha-906000 host status = "Running" (err=<nil>)
	I0721 16:39:28.817161    4999 host.go:66] Checking if "ha-906000" exists ...
	I0721 16:39:28.817411    4999 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-906000
	I0721 16:39:28.835288    4999 host.go:66] Checking if "ha-906000" exists ...
	I0721 16:39:28.835582    4999 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 16:39:28.835648    4999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-906000
	I0721 16:39:28.853822    4999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50600 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1112/.minikube/machines/ha-906000/id_rsa Username:docker}
	I0721 16:39:28.938228    4999 ssh_runner.go:195] Run: systemctl --version
	I0721 16:39:28.942904    4999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 16:39:28.953547    4999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-906000
	I0721 16:39:28.972468    4999 kubeconfig.go:125] found "ha-906000" server: "https://127.0.0.1:50604"
	I0721 16:39:28.972500    4999 api_server.go:166] Checking apiserver status ...
	I0721 16:39:28.972539    4999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 16:39:28.983553    4999 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2383/cgroup
	W0721 16:39:28.992438    4999 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2383/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0721 16:39:28.992491    4999 ssh_runner.go:195] Run: ls
	I0721 16:39:28.996671    4999 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50604/healthz ...
	I0721 16:39:29.000459    4999 api_server.go:279] https://127.0.0.1:50604/healthz returned 200:
	ok
	I0721 16:39:29.000474    4999 status.go:422] ha-906000 apiserver status = Running (err=<nil>)
	I0721 16:39:29.000490    4999 status.go:257] ha-906000 status: &{Name:ha-906000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 16:39:29.000501    4999 status.go:255] checking status of ha-906000-m02 ...
	I0721 16:39:29.000762    4999 cli_runner.go:164] Run: docker container inspect ha-906000-m02 --format={{.State.Status}}
	I0721 16:39:29.019048    4999 status.go:330] ha-906000-m02 host status = "Stopped" (err=<nil>)
	I0721 16:39:29.019084    4999 status.go:343] host is not running, skipping remaining checks
	I0721 16:39:29.019096    4999 status.go:257] ha-906000-m02 status: &{Name:ha-906000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 16:39:29.019129    4999 status.go:255] checking status of ha-906000-m03 ...
	I0721 16:39:29.019448    4999 cli_runner.go:164] Run: docker container inspect ha-906000-m03 --format={{.State.Status}}
	I0721 16:39:29.038099    4999 status.go:330] ha-906000-m03 host status = "Running" (err=<nil>)
	I0721 16:39:29.038125    4999 host.go:66] Checking if "ha-906000-m03" exists ...
	I0721 16:39:29.038477    4999 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-906000-m03
	I0721 16:39:29.056862    4999 host.go:66] Checking if "ha-906000-m03" exists ...
	I0721 16:39:29.057113    4999 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 16:39:29.057165    4999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-906000-m03
	I0721 16:39:29.075377    4999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50710 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1112/.minikube/machines/ha-906000-m03/id_rsa Username:docker}
	I0721 16:39:29.157165    4999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 16:39:29.167775    4999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-906000
	I0721 16:39:29.187023    4999 kubeconfig.go:125] found "ha-906000" server: "https://127.0.0.1:50604"
	I0721 16:39:29.187045    4999 api_server.go:166] Checking apiserver status ...
	I0721 16:39:29.187087    4999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0721 16:39:29.197476    4999 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2286/cgroup
	W0721 16:39:29.207047    4999 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2286/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0721 16:39:29.207109    4999 ssh_runner.go:195] Run: ls
	I0721 16:39:29.212056    4999 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50604/healthz ...
	I0721 16:39:29.216798    4999 api_server.go:279] https://127.0.0.1:50604/healthz returned 200:
	ok
	I0721 16:39:29.216814    4999 status.go:422] ha-906000-m03 apiserver status = Running (err=<nil>)
	I0721 16:39:29.216823    4999 status.go:257] ha-906000-m03 status: &{Name:ha-906000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 16:39:29.216836    4999 status.go:255] checking status of ha-906000-m04 ...
	I0721 16:39:29.217121    4999 cli_runner.go:164] Run: docker container inspect ha-906000-m04 --format={{.State.Status}}
	I0721 16:39:29.236082    4999 status.go:330] ha-906000-m04 host status = "Running" (err=<nil>)
	I0721 16:39:29.236124    4999 host.go:66] Checking if "ha-906000-m04" exists ...
	I0721 16:39:29.236414    4999 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-906000-m04
	I0721 16:39:29.254508    4999 host.go:66] Checking if "ha-906000-m04" exists ...
	I0721 16:39:29.254760    4999 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0721 16:39:29.254810    4999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-906000-m04
	I0721 16:39:29.272809    4999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50831 SSHKeyPath:/Users/jenkins/minikube-integration/19312-1112/.minikube/machines/ha-906000-m04/id_rsa Username:docker}
	I0721 16:39:29.354863    4999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0721 16:39:29.366680    4999 status.go:257] ha-906000-m04 status: &{Name:ha-906000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.35s)
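Note that the Non-zero exit above is still a pass: minikube status deliberately returns a non-zero code (7 here) when any host in the profile is stopped, and the test asserts on that. By hand:

# Stop the m02 control plane; status then reports it Stopped and exits non-zero:
out/minikube-darwin-amd64 -p ha-906000 node stop m02 -v=7 --alsologtostderr
out/minikube-darwin-amd64 -p ha-906000 status -v=7 --alsologtostderr || echo "status exit $?"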

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.47s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.47s)

TestMultiControlPlane/serial/RestartSecondaryNode (23.7s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 node start m02 -v=7 --alsologtostderr
E0721 16:39:42.214482    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-906000 node start m02 -v=7 --alsologtostderr: (22.215247782s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-darwin-amd64 -p ha-906000 status -v=7 --alsologtostderr: (1.421984491s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (23.70s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.77s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
E0721 16:39:54.289833    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 16:39:54.294949    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.77s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (216.71s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-906000 -v=7 --alsologtostderr
E0721 16:39:54.305290    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 16:39:54.325983    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 16:39:54.367370    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-906000 -v=7 --alsologtostderr
E0721 16:39:54.448112    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 16:39:54.608537    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 16:39:54.928648    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 16:39:55.568924    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 16:39:56.851157    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 16:39:59.411265    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 16:40:04.531417    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 16:40:14.771712    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-906000 -v=7 --alsologtostderr: (33.905082143s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-906000 --wait=true -v=7 --alsologtostderr
E0721 16:40:35.251732    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 16:41:16.213738    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 16:42:38.133291    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-906000 --wait=true -v=7 --alsologtostderr: (3m2.673074515s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-906000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (216.71s)
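The restart check is stop-then-start with --wait=true, asserting the node list afterwards matches the list before. (The repeated cert_rotation warnings interleaved above reference client certs of earlier, since-deleted profiles and appear to be background watcher noise, not part of this test.) In sketch form:

# Record nodes, stop everything, restart, and compare the node list:
out/minikube-darwin-amd64 node list -p ha-906000 -v=7 --alsologtostderr
out/minikube-darwin-amd64 stop -p ha-906000 -v=7 --alsologtostderr
out/minikube-darwin-amd64 start -p ha-906000 --wait=true -v=7 --alsologtostderr
out/minikube-darwin-amd64 node list -p ha-906000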

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.5s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-906000 node delete m03 -v=7 --alsologtostderr: (9.753158214s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.50s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.47s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.47s)

TestMultiControlPlane/serial/StopCluster (32.56s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-906000 stop -v=7 --alsologtostderr: (32.452419347s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 status -v=7 --alsologtostderr
E0721 16:44:14.523186    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-906000 status -v=7 --alsologtostderr: exit status 7 (111.295635ms)

-- stdout --
	ha-906000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-906000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-906000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0721 16:44:14.492297    5465 out.go:291] Setting OutFile to fd 1 ...
	I0721 16:44:14.492575    5465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:44:14.492581    5465 out.go:304] Setting ErrFile to fd 2...
	I0721 16:44:14.492585    5465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0721 16:44:14.492760    5465 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19312-1112/.minikube/bin
	I0721 16:44:14.492963    5465 out.go:298] Setting JSON to false
	I0721 16:44:14.492989    5465 mustload.go:65] Loading cluster: ha-906000
	I0721 16:44:14.493031    5465 notify.go:220] Checking for updates...
	I0721 16:44:14.493286    5465 config.go:182] Loaded profile config "ha-906000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0721 16:44:14.493301    5465 status.go:255] checking status of ha-906000 ...
	I0721 16:44:14.493676    5465 cli_runner.go:164] Run: docker container inspect ha-906000 --format={{.State.Status}}
	I0721 16:44:14.511399    5465 status.go:330] ha-906000 host status = "Stopped" (err=<nil>)
	I0721 16:44:14.511451    5465 status.go:343] host is not running, skipping remaining checks
	I0721 16:44:14.511464    5465 status.go:257] ha-906000 status: &{Name:ha-906000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 16:44:14.511495    5465 status.go:255] checking status of ha-906000-m02 ...
	I0721 16:44:14.511771    5465 cli_runner.go:164] Run: docker container inspect ha-906000-m02 --format={{.State.Status}}
	I0721 16:44:14.530018    5465 status.go:330] ha-906000-m02 host status = "Stopped" (err=<nil>)
	I0721 16:44:14.530041    5465 status.go:343] host is not running, skipping remaining checks
	I0721 16:44:14.530048    5465 status.go:257] ha-906000-m02 status: &{Name:ha-906000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0721 16:44:14.530068    5465 status.go:255] checking status of ha-906000-m04 ...
	I0721 16:44:14.530315    5465 cli_runner.go:164] Run: docker container inspect ha-906000-m04 --format={{.State.Status}}
	I0721 16:44:14.548304    5465 status.go:330] ha-906000-m04 host status = "Stopped" (err=<nil>)
	I0721 16:44:14.548327    5465 status.go:343] host is not running, skipping remaining checks
	I0721 16:44:14.548334    5465 status.go:257] ha-906000-m04 status: &{Name:ha-906000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.56s)

TestMultiControlPlane/serial/RestartCluster (84.16s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-906000 --wait=true -v=7 --alsologtostderr --driver=docker 
E0721 16:44:54.288538    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
E0721 16:45:21.972183    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-906000 --wait=true -v=7 --alsologtostderr --driver=docker : (1m23.419530129s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (84.16s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.47s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.47s)

TestMultiControlPlane/serial/AddSecondaryNode (35.13s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-906000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-906000 --control-plane -v=7 --alsologtostderr: (34.293413365s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-906000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (35.13s)
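Unlike AddWorkerNode earlier, this add passes --control-plane, so the new node joins as another control plane:

# Grow the control plane by one node and re-check cluster status:
out/minikube-darwin-amd64 node add -p ha-906000 --control-plane -v=7 --alsologtostderr
out/minikube-darwin-amd64 -p ha-906000 status -v=7 --alsologtostderr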

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.66s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.66s)

TestImageBuild/serial/Setup (21.37s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-852000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-852000 --driver=docker : (21.366196112s)
--- PASS: TestImageBuild/serial/Setup (21.37s)

TestImageBuild/serial/NormalBuild (3.97s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-852000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-852000: (3.970405872s)
--- PASS: TestImageBuild/serial/NormalBuild (3.97s)

TestImageBuild/serial/BuildWithBuildArg (1.44s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-852000
image_test.go:99: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-852000: (1.436974078s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.44s)

TestImageBuild/serial/BuildWithDockerIgnore (1.13s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-852000
image_test.go:133: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-852000: (1.125567552s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.13s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.21s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-852000
image_test.go:88: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-852000: (1.206732048s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.21s)
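
The TestImageBuild steps above exercise "minikube image build" end to end: a plain build, a build with --build-opt=build-arg and --build-opt=no-cache, a .dockerignore-aware build, and a build with an explicitly named Dockerfile. A minimal Go sketch of driving the same variants via os/exec, mirroring the (dbg) Run lines above; the binary path, testdata paths, and profile name are taken from the log, and error handling is simplified:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// build shells out to the minikube binary under test, the same way
	// the (dbg) Run lines recorded by image_test.go do.
	func build(args ...string) error {
		cmd := exec.Command("out/minikube-darwin-amd64",
			append([]string{"image", "build"}, args...)...)
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		return err
	}

	func main() {
		// Plain build, as in TestImageBuild/serial/NormalBuild.
		_ = build("-t", "aaa:latest", "./testdata/image-build/test-normal", "-p", "image-852000")
		// Build arg plus cache bypass, as in BuildWithBuildArg.
		_ = build("-t", "aaa:latest", "--build-opt=build-arg=ENV_A=test_env_str",
			"--build-opt=no-cache", "./testdata/image-build/test-arg", "-p", "image-852000")
		// Explicit Dockerfile path, as in BuildWithSpecifiedDockerfile.
		_ = build("-t", "aaa:latest", "-f", "inner/Dockerfile",
			"./testdata/image-build/test-f", "-p", "image-852000")
	}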

                                                
                                    
TestJSONOutput/start/Command (74.93s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-475000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-475000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (1m14.93001879s)
--- PASS: TestJSONOutput/start/Command (74.93s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.46s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-475000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.46s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.48s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-475000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.48s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.6s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-475000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-475000 --output=json --user=testUser: (5.59781533s)
--- PASS: TestJSONOutput/stop/Command (5.60s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.57s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-454000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-454000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (354.197296ms)

-- stdout --
	{"specversion":"1.0","id":"e146f65d-ffaf-492d-bc58-f5bab9823bf0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-454000] minikube v1.33.1 on Darwin 14.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8d6a658a-f6d9-4bd0-99ed-ab93365f6f64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19312"}}
	{"specversion":"1.0","id":"62f36e21-6a29-48d5-bc7e-6efdf3b0fc61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig"}}
	{"specversion":"1.0","id":"022592e6-f899-4474-ac5a-1daa1c88c1b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"29d72c29-6619-42dc-a115-d9b92deb43d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cd83639c-c9d0-4e9b-a3e5-fdb084dd3f41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19312-1112/.minikube"}}
	{"specversion":"1.0","id":"ac073f15-5a50-438d-a909-0280307caff8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1a16d12f-bfa7-4009-89b6-abe6583e8dbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-454000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-454000
--- PASS: TestErrorJSONOutput (0.57s)
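
Each line in the stdout block above is a CloudEvents v1.0 envelope; the error event carries name DRV_UNSUPPORTED_OS and exitcode 56 in its data map. A minimal Go sketch of decoding one such line; the struct is an illustration built from the fields visible above, not minikube's own type:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// cloudEvent mirrors the envelope fields shown in the log; extra
	// fields such as datacontenttype are simply ignored by Unmarshal.
	type cloudEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		line := `{"specversion":"1.0","id":"1a16d12f-bfa7-4009-89b6-abe6583e8dbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
		var ev cloudEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		fmt.Println(ev.Type, ev.Data["name"], ev.Data["message"])
	}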

                                                
                                    
TestKicCustomNetwork/create_custom_network (23.34s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-863000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-863000 --network=: (21.389800928s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-863000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-863000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-863000: (1.932628964s)
--- PASS: TestKicCustomNetwork/create_custom_network (23.34s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (23.3s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-961000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-961000 --network=bridge: (21.388849549s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-961000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-961000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-961000: (1.894137013s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.30s)

                                                
                                    
TestKicExistingNetwork (22.57s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-482000 --network=existing-network
E0721 16:49:14.554403    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-482000 --network=existing-network: (20.669940744s)
helpers_test.go:175: Cleaning up "existing-network-482000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-482000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-482000: (1.724265012s)
--- PASS: TestKicExistingNetwork (22.57s)

                                                
                                    
TestKicCustomSubnet (22.54s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-229000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-229000 --subnet=192.168.60.0/24: (20.55476222s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-229000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-229000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-229000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-229000: (1.970197649s)
--- PASS: TestKicCustomSubnet (22.54s)
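
TestKicCustomSubnet asserts that the network minikube created actually uses the requested 192.168.60.0/24 range by reading it back through a docker network inspect Go template. A minimal standalone Go sketch of the same check; the network name and expected subnet are taken from the log above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template the test runs: pull the first IPAM config's subnet.
		out, err := exec.Command("docker", "network", "inspect",
			"custom-subnet-229000", "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			panic(err)
		}
		if got := strings.TrimSpace(string(out)); got != "192.168.60.0/24" {
			panic(fmt.Sprintf("unexpected subnet: %s", got))
		}
		fmt.Println("subnet matches 192.168.60.0/24")
	}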

                                                
                                    
TestKicStaticIP (23.59s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-967000 --static-ip=192.168.200.200
E0721 16:49:54.318601    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/functional-307000/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-967000 --static-ip=192.168.200.200: (21.440393242s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-967000 ip
helpers_test.go:175: Cleaning up "static-ip-967000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-967000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-967000: (1.970252889s)
--- PASS: TestKicStaticIP (23.59s)

                                                
                                    
TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
TestMinikubeProfile (48.71s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-671000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-671000 --driver=docker : (21.175177206s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-673000 --driver=docker 
E0721 16:50:37.604475    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-673000 --driver=docker : (22.396873742s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-671000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-673000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-673000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-673000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-673000: (1.977893541s)
helpers_test.go:175: Cleaning up "first-671000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-671000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-671000: (1.997526404s)
--- PASS: TestMinikubeProfile (48.71s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.49s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-349000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-349000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.484948713s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.49s)

                                                
                                    
TestPreload (133.9s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-385000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-385000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m35.464843963s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-385000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-385000 image pull gcr.io/k8s-minikube/busybox: (5.846922872s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-385000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-385000: (10.756189983s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-385000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
E0721 17:39:14.705117    2043 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19312-1112/.minikube/profiles/addons-860000/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-385000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (19.539201826s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-385000 image list
helpers_test.go:175: Cleaning up "test-preload-385000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-385000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-385000: (2.057986809s)
--- PASS: TestPreload (133.90s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (9.77s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19312
- KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current688472622/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current688472622/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current688472622/001/.minikube/bin/docker-machine-driver-hyperkit

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current688472622/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (9.77s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.13s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19312
- KUBECONFIG=/Users/jenkins/minikube-integration/19312-1112/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2591685318/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2591685318/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2591685318/001/.minikube/bin/docker-machine-driver-hyperkit

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2591685318/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.13s)

                                                
                                    

Test skip (19/210)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (18.99s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 14.837346ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-x454p" [a0d637c8-2cdf-4f26-9394-f72adda95062] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004791407s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-pk9bm" [2fe5ad2c-14c9-417c-ae8b-d2111d6a544e] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004025614s
addons_test.go:342: (dbg) Run:  kubectl --context addons-860000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-860000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-860000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.880817379s)
addons_test.go:357: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (18.99s)

                                                
                                    
TestAddons/parallel/Ingress (10.65s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-860000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-860000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-860000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4ba6450a-99fc-4a22-ad01-6353cc0d32f0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4ba6450a-99fc-4a22-ad01-6353cc0d32f0] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004416623s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-860000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (10.65s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (13.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-307000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-307000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-444fc" [aa18d70b-9855-4206-9a8c-cb8aa7898ef1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-444fc" [aa18d70b-9855-4206-9a8c-cb8aa7898ef1] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.020001791s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (13.14s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)