Test Report: Docker_macOS 19360

cd79d30fb13c14d30ca0dbfe151ef256c3a20136:2024-07-31:35589

Tests failed (22/210)

TestOffline (758.08s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-436000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-436000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m37.536026977s)

-- stdout --
	* [offline-docker-436000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "offline-docker-436000" primary control-plane node in "offline-docker-436000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-436000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0731 15:43:37.654675   69067 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:43:37.654943   69067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:43:37.654948   69067 out.go:304] Setting ErrFile to fd 2...
	I0731 15:43:37.654952   69067 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:43:37.655113   69067 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 15:43:37.656827   69067 out.go:298] Setting JSON to false
	I0731 15:43:37.680294   69067 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":22384,"bootTime":1722443433,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0731 15:43:37.680385   69067 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:43:37.702207   69067 out.go:177] * [offline-docker-436000] minikube v1.33.1 on Darwin 14.5
	I0731 15:43:37.743735   69067 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 15:43:37.743750   69067 notify.go:220] Checking for updates...
	I0731 15:43:37.785757   69067 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	I0731 15:43:37.806733   69067 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0731 15:43:37.827759   69067 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:43:37.848795   69067 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	I0731 15:43:37.869668   69067 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:43:37.891179   69067 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:43:37.915005   69067 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0731 15:43:37.915186   69067 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 15:43:38.037788   69067 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:false NGoroutines:170 SystemTime:2024-07-31 22:43:38.028093327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0731 15:43:38.080519   69067 out.go:177] * Using the docker driver based on user configuration
	I0731 15:43:38.101679   69067 start.go:297] selected driver: docker
	I0731 15:43:38.101707   69067 start.go:901] validating driver "docker" against <nil>
	I0731 15:43:38.101724   69067 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:43:38.106126   69067 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 15:43:38.200394   69067 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:false NGoroutines:170 SystemTime:2024-07-31 22:43:38.191096283 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0731 15:43:38.200604   69067 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:43:38.200798   69067 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:43:38.222375   69067 out.go:177] * Using Docker Desktop driver with root privileges
	I0731 15:43:38.243492   69067 cni.go:84] Creating CNI manager for ""
	I0731 15:43:38.243516   69067 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:43:38.243523   69067 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 15:43:38.243581   69067 start.go:340] cluster config:
	{Name:offline-docker-436000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-436000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:43:38.264713   69067 out.go:177] * Starting "offline-docker-436000" primary control-plane node in "offline-docker-436000" cluster
	I0731 15:43:38.306583   69067 cache.go:121] Beginning downloading kic base image for docker with docker
	I0731 15:43:38.348511   69067 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0731 15:43:38.411480   69067 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:43:38.411528   69067 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0731 15:43:38.411566   69067 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0731 15:43:38.411589   69067 cache.go:56] Caching tarball of preloaded images
	I0731 15:43:38.411824   69067 preload.go:172] Found /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 15:43:38.411846   69067 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:43:38.413431   69067 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/offline-docker-436000/config.json ...
	I0731 15:43:38.413570   69067 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/offline-docker-436000/config.json: {Name:mkd8e72f7e99e705a57dba55db1871cafa5e5c72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0731 15:43:38.445720   69067 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0731 15:43:38.445738   69067 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0731 15:43:38.445890   69067 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0731 15:43:38.445913   69067 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0731 15:43:38.445920   69067 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0731 15:43:38.445928   69067 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0731 15:43:38.445932   69067 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0731 15:43:38.786293   69067 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0731 15:43:38.786350   69067 cache.go:194] Successfully downloaded all kic artifacts
	I0731 15:43:38.786400   69067 start.go:360] acquireMachinesLock for offline-docker-436000: {Name:mkebf63dd480c91e82c5c83fa330c16b877087e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:43:38.786635   69067 start.go:364] duration metric: took 222.341µs to acquireMachinesLock for "offline-docker-436000"
	I0731 15:43:38.786666   69067 start.go:93] Provisioning new machine with config: &{Name:offline-docker-436000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-436000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:43:38.786742   69067 start.go:125] createHost starting for "" (driver="docker")
	I0731 15:43:38.829189   69067 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0731 15:43:38.829379   69067 start.go:159] libmachine.API.Create for "offline-docker-436000" (driver="docker")
	I0731 15:43:38.829407   69067 client.go:168] LocalClient.Create starting
	I0731 15:43:38.829501   69067 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/ca.pem
	I0731 15:43:38.829547   69067 main.go:141] libmachine: Decoding PEM data...
	I0731 15:43:38.829563   69067 main.go:141] libmachine: Parsing certificate...
	I0731 15:43:38.829634   69067 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/cert.pem
	I0731 15:43:38.829672   69067 main.go:141] libmachine: Decoding PEM data...
	I0731 15:43:38.829680   69067 main.go:141] libmachine: Parsing certificate...
	I0731 15:43:38.830345   69067 cli_runner.go:164] Run: docker network inspect offline-docker-436000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 15:43:38.915540   69067 cli_runner.go:211] docker network inspect offline-docker-436000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 15:43:38.915655   69067 network_create.go:284] running [docker network inspect offline-docker-436000] to gather additional debugging logs...
	I0731 15:43:38.915674   69067 cli_runner.go:164] Run: docker network inspect offline-docker-436000
	W0731 15:43:38.933816   69067 cli_runner.go:211] docker network inspect offline-docker-436000 returned with exit code 1
	I0731 15:43:38.933845   69067 network_create.go:287] error running [docker network inspect offline-docker-436000]: docker network inspect offline-docker-436000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-436000 not found
	I0731 15:43:38.933867   69067 network_create.go:289] output of [docker network inspect offline-docker-436000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-436000 not found
	
	** /stderr **
	I0731 15:43:38.933979   69067 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 15:43:38.954235   69067 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:43:38.955759   69067 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:43:38.956120   69067 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000ce5080}
	I0731 15:43:38.956139   69067 network_create.go:124] attempt to create docker network offline-docker-436000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0731 15:43:38.956211   69067 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-436000 offline-docker-436000
	W0731 15:43:38.974966   69067 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-436000 offline-docker-436000 returned with exit code 1
	W0731 15:43:38.975010   69067 network_create.go:149] failed to create docker network offline-docker-436000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-436000 offline-docker-436000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0731 15:43:38.975031   69067 network_create.go:116] failed to create docker network offline-docker-436000 192.168.67.0/24, will retry: subnet is taken
	I0731 15:43:38.976469   69067 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:43:38.976854   69067 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00155ee60}
	I0731 15:43:38.976868   69067 network_create.go:124] attempt to create docker network offline-docker-436000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0731 15:43:38.976940   69067 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-436000 offline-docker-436000
	I0731 15:43:39.045469   69067 network_create.go:108] docker network offline-docker-436000 192.168.76.0/24 created
	I0731 15:43:39.045522   69067 kic.go:121] calculated static IP "192.168.76.2" for the "offline-docker-436000" container
	I0731 15:43:39.045700   69067 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 15:43:39.066270   69067 cli_runner.go:164] Run: docker volume create offline-docker-436000 --label name.minikube.sigs.k8s.io=offline-docker-436000 --label created_by.minikube.sigs.k8s.io=true
	I0731 15:43:39.087342   69067 oci.go:103] Successfully created a docker volume offline-docker-436000
	I0731 15:43:39.087453   69067 cli_runner.go:164] Run: docker run --rm --name offline-docker-436000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-436000 --entrypoint /usr/bin/test -v offline-docker-436000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0731 15:43:39.750806   69067 oci.go:107] Successfully prepared a docker volume offline-docker-436000
	I0731 15:43:39.750853   69067 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:43:39.750866   69067 kic.go:194] Starting extracting preloaded images to volume ...
	I0731 15:43:39.750981   69067 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-436000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 15:49:38.836700   69067 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 15:49:38.836866   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:49:38.857757   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	I0731 15:49:38.857875   69067 retry.go:31] will retry after 138.436939ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:38.998695   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:49:39.017493   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	I0731 15:49:39.017601   69067 retry.go:31] will retry after 541.018ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:39.559880   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:49:39.580020   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	I0731 15:49:39.580129   69067 retry.go:31] will retry after 502.106742ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:40.083316   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:49:40.103614   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	W0731 15:49:40.103722   69067 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	
	W0731 15:49:40.103751   69067 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:40.103810   69067 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 15:49:40.103867   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:49:40.122511   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	I0731 15:49:40.122617   69067 retry.go:31] will retry after 134.785298ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:40.259758   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:49:40.278534   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	I0731 15:49:40.278636   69067 retry.go:31] will retry after 348.793289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:40.629880   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:49:40.649970   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	I0731 15:49:40.650063   69067 retry.go:31] will retry after 401.214094ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:41.053694   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:49:41.072967   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	I0731 15:49:41.073062   69067 retry.go:31] will retry after 787.710107ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:41.863233   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:49:41.883110   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	W0731 15:49:41.883215   69067 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	
	W0731 15:49:41.883232   69067 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:41.883248   69067 start.go:128] duration metric: took 6m3.091533631s to createHost
	I0731 15:49:41.883254   69067 start.go:83] releasing machines lock for "offline-docker-436000", held for 6m3.091650721s
	W0731 15:49:41.883270   69067 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0731 15:49:41.883720   69067 cli_runner.go:164] Run: docker container inspect offline-docker-436000 --format={{.State.Status}}
	W0731 15:49:41.901802   69067 cli_runner.go:211] docker container inspect offline-docker-436000 --format={{.State.Status}} returned with exit code 1
	I0731 15:49:41.901867   69067 delete.go:82] Unable to get host status for offline-docker-436000, assuming it has already been deleted: state: unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	W0731 15:49:41.901972   69067 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0731 15:49:41.901981   69067 start.go:729] Will try again in 5 seconds ...
	I0731 15:49:46.905002   69067 start.go:360] acquireMachinesLock for offline-docker-436000: {Name:mkebf63dd480c91e82c5c83fa330c16b877087e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:49:46.905338   69067 start.go:364] duration metric: took 208.482µs to acquireMachinesLock for "offline-docker-436000"
	I0731 15:49:46.905376   69067 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:49:46.905394   69067 fix.go:54] fixHost starting: 
	I0731 15:49:46.905880   69067 cli_runner.go:164] Run: docker container inspect offline-docker-436000 --format={{.State.Status}}
	W0731 15:49:46.924809   69067 cli_runner.go:211] docker container inspect offline-docker-436000 --format={{.State.Status}} returned with exit code 1
	I0731 15:49:46.924854   69067 fix.go:112] recreateIfNeeded on offline-docker-436000: state= err=unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:46.924872   69067 fix.go:117] machineExists: false. err=machine does not exist
	I0731 15:49:46.947749   69067 out.go:177] * docker "offline-docker-436000" container is missing, will recreate.
	I0731 15:49:46.969300   69067 delete.go:124] DEMOLISHING offline-docker-436000 ...
	I0731 15:49:46.969553   69067 cli_runner.go:164] Run: docker container inspect offline-docker-436000 --format={{.State.Status}}
	W0731 15:49:46.988220   69067 cli_runner.go:211] docker container inspect offline-docker-436000 --format={{.State.Status}} returned with exit code 1
	W0731 15:49:46.988266   69067 stop.go:83] unable to get state: unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:46.988283   69067 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:46.988660   69067 cli_runner.go:164] Run: docker container inspect offline-docker-436000 --format={{.State.Status}}
	W0731 15:49:47.005349   69067 cli_runner.go:211] docker container inspect offline-docker-436000 --format={{.State.Status}} returned with exit code 1
	I0731 15:49:47.005415   69067 delete.go:82] Unable to get host status for offline-docker-436000, assuming it has already been deleted: state: unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:47.005507   69067 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-436000
	W0731 15:49:47.022820   69067 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-436000 returned with exit code 1
	I0731 15:49:47.022872   69067 kic.go:371] could not find the container offline-docker-436000 to remove it. will try anyways
	I0731 15:49:47.022949   69067 cli_runner.go:164] Run: docker container inspect offline-docker-436000 --format={{.State.Status}}
	W0731 15:49:47.040027   69067 cli_runner.go:211] docker container inspect offline-docker-436000 --format={{.State.Status}} returned with exit code 1
	W0731 15:49:47.040077   69067 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:47.040170   69067 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-436000 /bin/bash -c "sudo init 0"
	W0731 15:49:47.056938   69067 cli_runner.go:211] docker exec --privileged -t offline-docker-436000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0731 15:49:47.056985   69067 oci.go:650] error shutdown offline-docker-436000: docker exec --privileged -t offline-docker-436000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:48.059349   69067 cli_runner.go:164] Run: docker container inspect offline-docker-436000 --format={{.State.Status}}
	W0731 15:49:48.079830   69067 cli_runner.go:211] docker container inspect offline-docker-436000 --format={{.State.Status}} returned with exit code 1
	I0731 15:49:48.079892   69067 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:48.079901   69067 oci.go:664] temporary error: container offline-docker-436000 status is  but expect it to be exited
	I0731 15:49:48.079924   69067 retry.go:31] will retry after 632.593936ms: couldn't verify container is exited. %v: unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:48.713865   69067 cli_runner.go:164] Run: docker container inspect offline-docker-436000 --format={{.State.Status}}
	W0731 15:49:48.733973   69067 cli_runner.go:211] docker container inspect offline-docker-436000 --format={{.State.Status}} returned with exit code 1
	I0731 15:49:48.734027   69067 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:48.734041   69067 oci.go:664] temporary error: container offline-docker-436000 status is  but expect it to be exited
	I0731 15:49:48.734063   69067 retry.go:31] will retry after 607.575763ms: couldn't verify container is exited. %v: unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:49.342121   69067 cli_runner.go:164] Run: docker container inspect offline-docker-436000 --format={{.State.Status}}
	W0731 15:49:49.361711   69067 cli_runner.go:211] docker container inspect offline-docker-436000 --format={{.State.Status}} returned with exit code 1
	I0731 15:49:49.361757   69067 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:49.361766   69067 oci.go:664] temporary error: container offline-docker-436000 status is  but expect it to be exited
	I0731 15:49:49.361790   69067 retry.go:31] will retry after 1.561891154s: couldn't verify container is exited. %v: unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:50.926117   69067 cli_runner.go:164] Run: docker container inspect offline-docker-436000 --format={{.State.Status}}
	W0731 15:49:50.945803   69067 cli_runner.go:211] docker container inspect offline-docker-436000 --format={{.State.Status}} returned with exit code 1
	I0731 15:49:50.945850   69067 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:50.945857   69067 oci.go:664] temporary error: container offline-docker-436000 status is  but expect it to be exited
	I0731 15:49:50.945883   69067 retry.go:31] will retry after 1.460966283s: couldn't verify container is exited. %v: unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:52.408983   69067 cli_runner.go:164] Run: docker container inspect offline-docker-436000 --format={{.State.Status}}
	W0731 15:49:52.428516   69067 cli_runner.go:211] docker container inspect offline-docker-436000 --format={{.State.Status}} returned with exit code 1
	I0731 15:49:52.428573   69067 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:52.428587   69067 oci.go:664] temporary error: container offline-docker-436000 status is  but expect it to be exited
	I0731 15:49:52.428614   69067 retry.go:31] will retry after 1.566008146s: couldn't verify container is exited. %v: unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:53.997022   69067 cli_runner.go:164] Run: docker container inspect offline-docker-436000 --format={{.State.Status}}
	W0731 15:49:54.016646   69067 cli_runner.go:211] docker container inspect offline-docker-436000 --format={{.State.Status}} returned with exit code 1
	I0731 15:49:54.016703   69067 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:54.016717   69067 oci.go:664] temporary error: container offline-docker-436000 status is  but expect it to be exited
	I0731 15:49:54.016746   69067 retry.go:31] will retry after 5.662344921s: couldn't verify container is exited. %v: unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:59.679406   69067 cli_runner.go:164] Run: docker container inspect offline-docker-436000 --format={{.State.Status}}
	W0731 15:49:59.697944   69067 cli_runner.go:211] docker container inspect offline-docker-436000 --format={{.State.Status}} returned with exit code 1
	I0731 15:49:59.697997   69067 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:49:59.698011   69067 oci.go:664] temporary error: container offline-docker-436000 status is  but expect it to be exited
	I0731 15:49:59.698037   69067 retry.go:31] will retry after 7.804155197s: couldn't verify container is exited. %v: unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:50:07.504639   69067 cli_runner.go:164] Run: docker container inspect offline-docker-436000 --format={{.State.Status}}
	W0731 15:50:07.524577   69067 cli_runner.go:211] docker container inspect offline-docker-436000 --format={{.State.Status}} returned with exit code 1
	I0731 15:50:07.524624   69067 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:50:07.524633   69067 oci.go:664] temporary error: container offline-docker-436000 status is  but expect it to be exited
	I0731 15:50:07.524666   69067 oci.go:88] couldn't shut down offline-docker-436000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	 
	I0731 15:50:07.524754   69067 cli_runner.go:164] Run: docker rm -f -v offline-docker-436000
	I0731 15:50:07.542444   69067 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-436000
	W0731 15:50:07.560203   69067 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-436000 returned with exit code 1
	I0731 15:50:07.560337   69067 cli_runner.go:164] Run: docker network inspect offline-docker-436000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 15:50:07.578290   69067 cli_runner.go:164] Run: docker network rm offline-docker-436000
	I0731 15:50:07.659240   69067 fix.go:124] Sleeping 1 second for extra luck!
	I0731 15:50:08.660366   69067 start.go:125] createHost starting for "" (driver="docker")
	I0731 15:50:08.682504   69067 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0731 15:50:08.682693   69067 start.go:159] libmachine.API.Create for "offline-docker-436000" (driver="docker")
	I0731 15:50:08.682722   69067 client.go:168] LocalClient.Create starting
	I0731 15:50:08.682983   69067 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/ca.pem
	I0731 15:50:08.683093   69067 main.go:141] libmachine: Decoding PEM data...
	I0731 15:50:08.683118   69067 main.go:141] libmachine: Parsing certificate...
	I0731 15:50:08.683209   69067 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/cert.pem
	I0731 15:50:08.683295   69067 main.go:141] libmachine: Decoding PEM data...
	I0731 15:50:08.683312   69067 main.go:141] libmachine: Parsing certificate...
	I0731 15:50:08.704567   69067 cli_runner.go:164] Run: docker network inspect offline-docker-436000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 15:50:08.724055   69067 cli_runner.go:211] docker network inspect offline-docker-436000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 15:50:08.724154   69067 network_create.go:284] running [docker network inspect offline-docker-436000] to gather additional debugging logs...
	I0731 15:50:08.724168   69067 cli_runner.go:164] Run: docker network inspect offline-docker-436000
	W0731 15:50:08.740970   69067 cli_runner.go:211] docker network inspect offline-docker-436000 returned with exit code 1
	I0731 15:50:08.741004   69067 network_create.go:287] error running [docker network inspect offline-docker-436000]: docker network inspect offline-docker-436000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-436000 not found
	I0731 15:50:08.741024   69067 network_create.go:289] output of [docker network inspect offline-docker-436000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-436000 not found
	
	** /stderr **
	I0731 15:50:08.741155   69067 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 15:50:08.760214   69067 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:50:08.761855   69067 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:50:08.763316   69067 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:50:08.765157   69067 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:50:08.767004   69067 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:50:08.767658   69067 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001524570}
	I0731 15:50:08.767681   69067 network_create.go:124] attempt to create docker network offline-docker-436000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0731 15:50:08.767815   69067 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-436000 offline-docker-436000
	I0731 15:50:08.832647   69067 network_create.go:108] docker network offline-docker-436000 192.168.94.0/24 created
	I0731 15:50:08.832685   69067 kic.go:121] calculated static IP "192.168.94.2" for the "offline-docker-436000" container
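	(Note: the network.go lines above are minikube's free-subnet walk: starting at 192.168.49.0/24 it steps the third octet by 9 (49, 58, 67, 76, 85) past subnets reserved by existing Docker networks, settles on 192.168.94.0/24, then takes .1 as the gateway and .2 as the node's static IP. A sketch of that scan, with a caller-supplied "taken" predicate standing in for minikube's reservation check:

	package main

	import "fmt"

	// freeSubnet steps the third octet by 9, as in the log above, and
	// returns the first 192.168.x.0/24 candidate that is not in use.
	func freeSubnet(taken func(cidr string) bool) (cidr, gateway, nodeIP string, err error) {
		for octet := 49; octet <= 247; octet += 9 {
			c := fmt.Sprintf("192.168.%d.0/24", octet)
			if taken(c) {
				continue // reserved by an existing network; keep walking
			}
			return c, fmt.Sprintf("192.168.%d.1", octet), // gateway
				fmt.Sprintf("192.168.%d.2", octet), nil // node IP = gateway + 1
		}
		return "", "", "", fmt.Errorf("no free /24 candidate found")
	})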
	I0731 15:50:08.832793   69067 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 15:50:08.851922   69067 cli_runner.go:164] Run: docker volume create offline-docker-436000 --label name.minikube.sigs.k8s.io=offline-docker-436000 --label created_by.minikube.sigs.k8s.io=true
	I0731 15:50:08.869138   69067 oci.go:103] Successfully created a docker volume offline-docker-436000
	I0731 15:50:08.869271   69067 cli_runner.go:164] Run: docker run --rm --name offline-docker-436000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-436000 --entrypoint /usr/bin/test -v offline-docker-436000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0731 15:50:09.132849   69067 oci.go:107] Successfully prepared a docker volume offline-docker-436000
	I0731 15:50:09.132899   69067 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:50:09.132916   69067 kic.go:194] Starting extracting preloaded images to volume ...
	I0731 15:50:09.133013   69067 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-436000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 15:56:08.732281   69067 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 15:56:08.732373   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:56:08.752239   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	I0731 15:56:08.752353   69067 retry.go:31] will retry after 248.953114ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
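	(Note: every retry.go:31 line from here on is one pass through minikube's retry helper: the port-22 lookup fails because the container was never created, so the call is retried with short, slightly randomized delays (248ms, 349ms, 740ms, ...) before the caller gives up. A minimal sketch of that pattern, not the actual retry package:

	package main

	import (
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn with roughly exponential, jittered
	// delays, mirroring the 248ms/349ms/740ms progression in the log.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			d := base << uint(i)                        // exponential growth
			d += time.Duration(rand.Int63n(int64(d)/2)) // up to 50% jitter
			time.Sleep(d)
		}
		return err
	}

	A caller would wrap the failing lookup, e.g. retryWithBackoff(4, 250*time.Millisecond, func() error { _, err := sshPort("offline-docker-436000"); return err }), where sshPort is a hypothetical stand-in for the docker-inspect port query above.)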
	I0731 15:56:09.002720   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:56:09.023031   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	I0731 15:56:09.023152   69067 retry.go:31] will retry after 349.362161ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:56:09.374222   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:56:09.393747   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	I0731 15:56:09.393853   69067 retry.go:31] will retry after 740.486287ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:56:10.134707   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:56:10.153736   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	W0731 15:56:10.153848   69067 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	
	W0731 15:56:10.153870   69067 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:56:10.153931   69067 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 15:56:10.153988   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:56:10.171365   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	I0731 15:56:10.171480   69067 retry.go:31] will retry after 300.95655ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:56:10.472714   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:56:10.491899   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	I0731 15:56:10.492007   69067 retry.go:31] will retry after 405.025865ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:56:10.898462   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:56:10.919528   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	I0731 15:56:10.919626   69067 retry.go:31] will retry after 374.585266ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:56:11.294536   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:56:11.315780   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	I0731 15:56:11.315894   69067 retry.go:31] will retry after 635.731077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:56:11.951957   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:56:11.969986   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	W0731 15:56:11.970096   69067 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	
	W0731 15:56:11.970113   69067 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:56:11.970124   69067 start.go:128] duration metric: took 6m3.260748128s to createHost
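	(Note: that 6m3s is the decisive number: host creation has a 360-second, i.e. six-minute, budget, so by the time the preload-extraction step returned, the deadline had already passed, which is exactly the "create host timed out in 360.000000 seconds" / DRV_CREATE_TIMEOUT failure printed below. A sketch of enforcing such a budget with a context deadline, as an illustration rather than minikube's actual start.go logic:

	package main

	import (
		"context"
		"fmt"
		"time"
	)

	// createHostWithDeadline gives host creation the same 360s budget
	// the log reports and fails the same way when creation overruns it.
	func createHostWithDeadline(create func(context.Context) error) error {
		ctx, cancel := context.WithTimeout(context.Background(), 360*time.Second)
		defer cancel()
		done := make(chan error, 1)
		go func() { done <- create(ctx) }()
		select {
		case err := <-done:
			return err
		case <-ctx.Done():
			return fmt.Errorf("creating host: create host timed out in %.6f seconds", 360.0)
		}
	})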
	I0731 15:56:11.970198   69067 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 15:56:11.970252   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:56:11.987100   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	I0731 15:56:11.987200   69067 retry.go:31] will retry after 260.845355ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:56:12.250439   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:56:12.269741   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	I0731 15:56:12.269834   69067 retry.go:31] will retry after 312.159708ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:56:12.582908   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:56:12.602617   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	I0731 15:56:12.602725   69067 retry.go:31] will retry after 781.617888ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:56:13.386798   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:56:13.405369   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	W0731 15:56:13.405467   69067 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	
	W0731 15:56:13.405486   69067 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:56:13.405556   69067 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 15:56:13.405631   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:56:13.422609   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	I0731 15:56:13.422702   69067 retry.go:31] will retry after 322.855892ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:56:13.748014   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:56:13.767129   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	I0731 15:56:13.767232   69067 retry.go:31] will retry after 408.763995ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:56:14.176830   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:56:14.196351   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	I0731 15:56:14.196449   69067 retry.go:31] will retry after 795.274267ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:56:14.992379   69067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000
	W0731 15:56:15.011798   69067 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000 returned with exit code 1
	W0731 15:56:15.011898   69067 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	
	W0731 15:56:15.011917   69067 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-436000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-436000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000
	I0731 15:56:15.011934   69067 fix.go:56] duration metric: took 6m28.05724193s for fixHost
	I0731 15:56:15.011943   69067 start.go:83] releasing machines lock for "offline-docker-436000", held for 6m28.05728976s
	W0731 15:56:15.012020   69067 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-436000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-436000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0731 15:56:15.055687   69067 out.go:177] 
	W0731 15:56:15.082496   69067 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0731 15:56:15.082541   69067 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0731 15:56:15.082567   69067 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0731 15:56:15.104692   69067 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-436000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:626: *** TestOffline FAILED at 2024-07-31 15:56:15.199607 -0700 PDT m=+6115.879043803
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-436000
helpers_test.go:235: (dbg) docker inspect offline-docker-436000:

-- stdout --
	[
	    {
	        "Name": "offline-docker-436000",
	        "Id": "e031140d2bc5ff40b559faebcfd013a2de21fce12c530d3f1e93746835b04610",
	        "Created": "2024-07-31T22:50:08.78428448Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-436000"
	        }
	    }
	]

-- /stdout --
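(Note: the inspect output above is a network object, not a container: it has "Driver": "bridge", an IPAM block, and an empty "Containers" map. The bridge network created at 15:50:08 outlived the run, while the container itself never existed, consistent with every "No such container" error earlier and the "Nonexistent" host state below. A bare "docker inspect NAME" matches either kind of object; telling them apart explicitly, sketched in Go:

	package main

	import "os/exec"

	// inspectKind reports whether name currently resolves to a container,
	// a network, or both; plain "docker inspect" happily matches either.
	func inspectKind(name string) (container, network bool) {
		container = exec.Command("docker", "container", "inspect", name).Run() == nil
		network = exec.Command("docker", "network", "inspect", name).Run() == nil
		return
	})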
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-436000 -n offline-docker-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-436000 -n offline-docker-436000: exit status 7 (72.397979ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0731 15:56:15.291973   69532 status.go:249] status error: host: state: unknown state "offline-docker-436000": docker container inspect offline-docker-436000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-436000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-436000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-436000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-436000
--- FAIL: TestOffline (758.08s)

TestCertOptions (7201.688s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-910000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E0731 16:09:33.290059   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 16:13:41.182056   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestCertExpiration (5m27s)
	TestCertOptions (4m55s)
	TestNetworkPlugins (30m42s)

goroutine 2544 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d
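(Note: goroutine 2544 is the test binary's watchdog: testing.(*M).startAlarm arms a timer for the suite's -timeout budget, two hours here, and panics when it fires, which is what produced this dump with the three still-running tests listed above it. In the spirit of that mechanism, not its actual implementation:

	package main

	import (
		"fmt"
		"time"
	)

	// startAlarm panics with a diagnostic if the deadline passes before
	// the returned timer is stopped; the shape of the watchdog above.
	func startAlarm(timeout time.Duration) *time.Timer {
		return time.AfterFunc(timeout, func() {
			panic(fmt.Sprintf("test timed out after %v", timeout))
		})
	})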

goroutine 1 [chan receive, 18 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000672820, 0xc0009fbbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0008b0150, {0x6407ae0, 0x2a, 0x2a}, {0xc0000061c0?, 0xc0009fbc38?, 0x642aac0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000b18140)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000b18140)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 8 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0001a4a00)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 194 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000900fc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 193
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 185 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x50a5900, 0xc00010ee40}, 0xc000810f50, 0xc000822f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x50a5900, 0xc00010ee40}, 0x50?, 0xc000810f50, 0xc000810f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x50a5900?, 0xc00010ee40?}, 0xc000b6f040?, 0x1f4c6a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000810fd0?, 0x1f929a4?, 0xc0008b0e88?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 195
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2535 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0x4d99aa40, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001b8a300?, 0xc00133a298?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001b8a300, {0xc00133a298, 0x568, 0x568})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001a76078, {0xc00133a298?, 0xc0015516c0?, 0x22e?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00142c3f0, {0x5080618, 0xc00141e1f0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x5080758, 0xc00142c3f0}, {0x5080618, 0xc00141e1f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0009f4678?, {0x5080758, 0xc00142c3f0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x63c9320?, {0x5080758?, 0xc00142c3f0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x5080758, 0xc00142c3f0}, {0x50806d8, 0xc001a76078}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001aba180?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 685
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 25 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 24
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

goroutine 2258 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000152690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0014fe9c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0014fe9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc0014fe9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:215 +0x39
testing.tRunner(0xc0014fe9c0, 0x50759c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390
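(Note: goroutine 2258 and the many other "chan receive, 31 minutes" goroutines below are all parked in the same place: t.Parallel() announces that a test may run in parallel, then blocks in waitParallel until a slot under the -parallel limit frees up. With TestCertOptions and TestCertExpiration wedged on hung subprocesses, slots stopped freeing, so the queued tests simply waited out the global timeout. The shape being blocked on, with a hypothetical test name:

	package integration

	import "testing"

	func TestSomethingParallel(t *testing.T) {
		t.Parallel() // parks in testing.(*testContext).waitParallel until a slot frees
		// test body runs only once a parallel slot is available
	})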

goroutine 2259 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000152690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0014fed00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0014fed00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc0014fed00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:292 +0xb4
testing.tRunner(0xc0014fed00, 0x50759d8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 184 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000b331d0, 0x2d)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x4b6a760?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000900ea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000b33200)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009ae000, {0x5081c00, 0xc0008b4450}, 0x1, 0xc00010ee40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009ae000, 0x3b9aca00, 0x0, 0x1, 0xc00010ee40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 195
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 195 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000b33200, 0xc00010ee40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 193
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 186 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 185
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 684 [syscall, 4 minutes]:
syscall.syscall6(0xc00142df80?, 0x1000000000010?, 0x10000000019?, 0x4daa5a38?, 0x90?, 0x6d4c108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc000bcd8a0?, 0x1e190c5?, 0x90?, 0x4fe1f60?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x1f499e5?, 0xc000bcd8d4, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc001b763f0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000208000)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc000208000)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0007361a0, 0xc000208000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertOptions(0xc0007361a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x445
testing.tRunner(0xc0007361a0, 0x5075918)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390
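(Note: goroutine 684 is the one that actually wedged the run: integration.Run launched the "minikube start -p cert-options-910000" subprocess via os/exec, and Cmd.Wait is parked in the wait4 syscall because the child never exits. Goroutine 2573 below is the context watcher exec spawned for this command; goroutine 685 is TestCertExpiration stuck the same way, with 2535/2536 as its stdout/stderr copiers. The shape of that call, as a sketch rather than the helper's real code:

	package main

	import (
		"bytes"
		"os/exec"
	)

	// run mirrors the shape of integration.Run above: start the child,
	// let exec's copier goroutines drain stdout/stderr, block in Wait.
	func run(name string, args ...string) (string, error) {
		cmd := exec.Command(name, args...)
		var out bytes.Buffer
		cmd.Stdout, cmd.Stderr = &out, &out
		err := cmd.Run() // parks in wait4 until the child exits
		return out.String(), err
	})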

goroutine 2282 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000152690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00148d860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00148d860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00148d860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00148d860, 0xc0008c0680)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2273
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 943 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x50a5900, 0xc00010ee40}, 0xc00080ff50, 0xc001473f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x50a5900, 0xc00010ee40}, 0x20?, 0xc00080ff50, 0xc00080ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x50a5900?, 0xc00010ee40?}, 0x239e016?, 0xc001bea900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x1f92945?, 0xc001525800?, 0xc0004d1620?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 962
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2573 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc000208000, 0xc00146e660)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 684
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2236 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000152690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0014fe000)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0014fe000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc0014fe000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc0014fe000, 0x5075a40)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 774 [IO wait, 113 minutes]:
internal/poll.runtime_pollWait(0x4d99a378, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0008c0100?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0008c0100)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc0008c0100)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0014e6e20)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0014e6e20)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc00017a3c0, {0x50987f0, 0xc0014e6e20})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc00017a3c0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc0013a1520?, 0xc0013a1520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 771
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129
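(Note: goroutine 774 is benign by comparison: the functional tests start a small HTTP proxy once (startHTTPProxy) and never tear it down, so its listener has been sitting in Accept for 113 minutes. Roughly, with a hypothetical handler:

	package main

	import "net/http"

	// startBackgroundProxy runs an always-on listener like the one in
	// goroutine 774; ListenAndServe parks in Accept until process exit.
	func startBackgroundProxy(handler http.Handler) {
		go func() {
			srv := &http.Server{Addr: "127.0.0.1:0", Handler: handler}
			_ = srv.ListenAndServe()
		}()
	})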

goroutine 685 [syscall, 5 minutes]:
syscall.syscall6(0xc00142df80?, 0x1000000000010?, 0x10000000019?, 0x4daa5a38?, 0x90?, 0x6d4c5b8?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc00164ba40?, 0x1e190c5?, 0x90?, 0x4fe1f60?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x1f499e5?, 0xc00164ba74, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc001b760c0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0001a8900)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0001a8900)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0007364e0, 0xc0001a8900)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertExpiration(0xc0007364e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:123 +0x2c5
testing.tRunner(0xc0007364e0, 0x5075910)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2184 [chan receive, 31 minutes]:
testing.(*T).Run(0xc00148c4e0, {0x39b862a?, 0x136abf86e164?}, 0xc0013b2090)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00148c4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc00148c4e0, 0x50759f8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1220 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0xc0014239e0)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1199
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

goroutine 2278 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000152690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00148d1e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00148d1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00148d1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00148d1e0, 0xc0008c0480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2273
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 958 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc00141ad80, 0xc000066780)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 957
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 942 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001b51e10, 0x2b)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x4b6a760?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0009c4d20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b51e40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00186fca0, {0x5081c00, 0xc001432930}, 0x1, 0xc00010ee40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00186fca0, 0x3b9aca00, 0x0, 0x1, 0xc00010ee40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 962
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2273 [chan receive, 31 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00148c000, 0xc0013b2090)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2184
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1219 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0xc0014239e0)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1199
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

goroutine 2281 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000152690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00148d6c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00148d6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00148d6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00148d6c0, 0xc0008c0600)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2273
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2536 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0x4d999f98, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001b8a3c0?, 0xc000527800?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001b8a3c0, {0xc000527800, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001a76090, {0xc000527800?, 0x4db3c578?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00142c420, {0x5080618, 0xc00141e200})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x5080758, 0xc00142c420}, {0x5080618, 0xc00141e200}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x5080758, 0xc00142c420})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x63c9320?, {0x5080758?, 0xc00142c420?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x5080758, 0xc00142c420}, {0x50806d8, 0xc001a76090}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0017e5b00?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 685
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 1123 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc001c05500, 0xc001ae4960)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1122
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2186 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000152690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00148c9c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00148c9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc00148c9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc00148c9c0, 0x5075a10)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2277 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000152690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00148d040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00148d040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00148d040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00148d040, 0xc0008c0400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2273
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2257 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000152690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0014fe820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0014fe820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc0014fe820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:143 +0x86
testing.tRunner(0xc0014fe820, 0x5075a48)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2280 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000152690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00148d520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00148d520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00148d520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00148d520, 0xc0008c0580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2273
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2276 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000152690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00148cea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00148cea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00148cea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00148cea0, 0xc0008c0300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2273
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2275 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000152690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00148cd00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00148cd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00148cd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00148cd00, 0xc0008c0200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2273
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2279 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000152690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00148d380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00148d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00148d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00148d380, 0xc0008c0500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2273
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2256 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000152690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0014fe680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0014fe680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc0014fe680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:85 +0x89
testing.tRunner(0xc0014fe680, 0x5075a20)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1213 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc00133cd80, 0xc0000662a0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 887
	/usr/local/go/src/os/exec/exec.go:754 +0x976
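
goroutine 1213 has been blocked on a channel send inside os/exec.(*Cmd).watchCtx for 109 minutes. That send is only drained by the command's Wait, so a goroutine stuck here usually means a started command whose Wait was never reached. A hedged sketch of the safe pattern (the command is illustrative, not taken from the suite):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Run = Start + Wait; reaching Wait lets the internal watchCtx
	// goroutine deliver its result instead of leaking in "chan send".
	cmd := exec.CommandContext(ctx, "sleep", "10")
	if err := cmd.Run(); err != nil {
		fmt.Println("command ended:", err)
	}
}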

goroutine 961 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0009c4e40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 960
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205
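
goroutine 961 is client-go's delaying work queue: every delaying queue spawns a waitingLoop goroutine that sits in a select and releases items once their delay expires, so one long-lived goroutine here is expected rather than a leak. A small usage sketch against the k8s.io/client-go/util/workqueue API:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	// NewDelayingQueue starts the background waitingLoop goroutine
	// visible in the trace above.
	q := workqueue.NewDelayingQueue()
	defer q.ShutDown()

	q.AddAfter("retry-me", 100*time.Millisecond)

	item, shutdown := q.Get() // blocks until the delay elapses
	if !shutdown {
		fmt.Println("got:", item)
		q.Done(item)
	}
}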

goroutine 1170 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc00141b080, 0xc001ae5920)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1169
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2571 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x4d99a280, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001b8aa80?, 0xc0008fe28f?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001b8aa80, {0xc0008fe28f, 0x571, 0x571})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001a760f8, {0xc0008fe28f?, 0xc0015516c0?, 0x225?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00142c810, {0x5080618, 0xc00141e168})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x5080758, 0xc00142c810}, {0x5080618, 0xc00141e168}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0009f6678?, {0x5080758, 0xc00142c810})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x63c9320?, {0x5080758?, 0xc00142c810?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x5080758, 0xc00142c810}, {0x50806d8, 0xc001a760f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001ae4660?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 684
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae
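
goroutine 2571 (and its twin 2572 below) is the output copier os/exec creates when a command's stdout or stderr is not an *os.File: Start wires the child's pipe into the supplied writer through io.Copy, and the goroutine stays in IO wait until the child writes or exits. An equivalent user-level sketch:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	var out bytes.Buffer

	// Because &out is not an *os.File, Start creates an os.Pipe plus a
	// copying goroutine (the "IO wait" frames above) to drain it.
	cmd := exec.Command("echo", "hello")
	cmd.Stdout = &out
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	fmt.Print(out.String())
}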

goroutine 2185 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000152690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00148c820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00148c820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc00148c820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc00148c820, 0x5075a00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1879 [syscall, 97 minutes]:
syscall.syscall(0x0?, 0xc001a664b0?, 0x706972745f646e75?, 0x3a6f672e73726570?)
	/usr/local/go/src/runtime/sys_darwin.go:23 +0x70
syscall.Flock(0x6e696d203a746e65?, 0x61642d6562756b69?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:682 +0x29
github.com/juju/mutex/v2.acquireFlock.func3()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:114 +0x34
github.com/juju/mutex/v2.acquireFlock.func4()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:121 +0x58
github.com/juju/mutex/v2.acquireFlock.func5()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:151 +0x22
created by github.com/juju/mutex/v2.acquireFlock in goroutine 1874
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:150 +0x4b1
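
goroutine 1879 has spent 97 minutes inside syscall.Flock: github.com/juju/mutex/v2 serializes minikube processes by taking an exclusive lock on a shared lock file from a helper goroutine, so this trace means some other holder never released the lock. A hedged sketch of the underlying primitive (the lock path is invented for the example):

package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	f, err := os.OpenFile("/tmp/example.lock", os.O_CREATE|os.O_RDWR, 0o644)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// LOCK_EX blocks until the current holder releases the lock, which is
	// the state goroutine 1879 is parked in; LOCK_EX|LOCK_NB would fail
	// fast instead of waiting.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		panic(err)
	}
	fmt.Println("lock acquired")

	_ = syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
}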

goroutine 944 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 943
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb
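
goroutine 944 is an apimachinery poller: the condition function is retried on an interval until it succeeds or the context ends, keeping one goroutine alive between ticks. PollImmediateUntilWithContext is deprecated in v0.30; a sketch using the current equivalent, wait.PollUntilContextTimeout:

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll every 200ms, give up after 2s; the "true" argument runs the
	// condition immediately instead of waiting one interval first.
	err := wait.PollUntilContextTimeout(context.Background(), 200*time.Millisecond, 2*time.Second, true,
		func(ctx context.Context) (done bool, err error) {
			fmt.Println("checking condition...")
			return true, nil
		})
	fmt.Println("poll finished:", err)
}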

goroutine 962 [chan receive, 111 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001b51e40, 0xc00010ee40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 960
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2537 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc0001a8900, 0xc001ae4720)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 685
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2274 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc000152690)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00148c1a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00148c1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00148c1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00148c1a0, 0xc0008c0080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2273
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2572 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x4d99a850, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001b8ab40?, 0xc00089dc00?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001b8ab40, {0xc00089dc00, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001a76110, {0xc00089dc00?, 0xc0015c5880?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00142c840, {0x5080618, 0xc00141e188})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x5080758, 0xc00142c840}, {0x5080618, 0xc00141e188}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000811678?, {0x5080758, 0xc00142c840})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x63c9320?, {0x5080758?, 0xc00142c840?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x5080758, 0xc00142c840}, {0x50806d8, 0xc001a76110}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001ae4840?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 684
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

TestDockerFlags (756.57s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-048000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0731 15:58:41.138786   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 15:59:33.285635   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 16:03:24.295166   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 16:03:41.142729   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 16:04:33.286967   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 16:08:41.145085   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-048000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 52 (12m35.692521602s)
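
The "(dbg) Run:" / "(dbg) Non-zero exit:" pairs above are the harness shelling out to the minikube binary, timing the run, and reporting the exit status. A hedged reconstruction of that pattern (names and signature are illustrative, not minikube's actual helper):

package integration

import (
	"errors"
	"os/exec"
	"testing"
	"time"
)

// runCmd runs a binary, times it, and logs a "Non-zero exit" line with the
// exit status, mirroring the log lines above.
func runCmd(t *testing.T, name string, args ...string) []byte {
	t.Helper()
	start := time.Now()
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			t.Logf("(dbg) Non-zero exit: %s: exit status %d (%s)",
				name, ee.ExitCode(), time.Since(start))
		}
	}
	return out
}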

-- stdout --
	* [docker-flags-048000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "docker-flags-048000" primary control-plane node in "docker-flags-048000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-048000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0731 15:56:47.644720   69623 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:56:47.645003   69623 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:56:47.645008   69623 out.go:304] Setting ErrFile to fd 2...
	I0731 15:56:47.645012   69623 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:56:47.645187   69623 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 15:56:47.646749   69623 out.go:298] Setting JSON to false
	I0731 15:56:47.669381   69623 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":23174,"bootTime":1722443433,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0731 15:56:47.669475   69623 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:56:47.691062   69623 out.go:177] * [docker-flags-048000] minikube v1.33.1 on Darwin 14.5
	I0731 15:56:47.734176   69623 notify.go:220] Checking for updates...
	I0731 15:56:47.754682   69623 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 15:56:47.775807   69623 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	I0731 15:56:47.797075   69623 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0731 15:56:47.818864   69623 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:56:47.839997   69623 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	I0731 15:56:47.861059   69623 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:56:47.882628   69623 config.go:182] Loaded profile config "force-systemd-flag-942000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:56:47.882835   69623 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:56:47.906828   69623 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0731 15:56:47.906986   69623 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 15:56:47.984340   69623 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:114 OomKillDisable:false NGoroutines:230 SystemTime:2024-07-31 22:56:47.975055043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0731 15:56:48.027827   69623 out.go:177] * Using the docker driver based on user configuration
	I0731 15:56:48.048824   69623 start.go:297] selected driver: docker
	I0731 15:56:48.048854   69623 start.go:901] validating driver "docker" against <nil>
	I0731 15:56:48.048869   69623 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:56:48.053338   69623 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 15:56:48.128445   69623 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:114 OomKillDisable:false NGoroutines:230 SystemTime:2024-07-31 22:56:48.119518917 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0731 15:56:48.128652   69623 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:56:48.128834   69623 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0731 15:56:48.150532   69623 out.go:177] * Using Docker Desktop driver with root privileges
	I0731 15:56:48.171483   69623 cni.go:84] Creating CNI manager for ""
	I0731 15:56:48.171529   69623 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:56:48.171542   69623 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 15:56:48.171643   69623 start.go:340] cluster config:
	{Name:docker-flags-048000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-048000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:56:48.193171   69623 out.go:177] * Starting "docker-flags-048000" primary control-plane node in "docker-flags-048000" cluster
	I0731 15:56:48.235445   69623 cache.go:121] Beginning downloading kic base image for docker with docker
	I0731 15:56:48.257035   69623 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0731 15:56:48.299272   69623 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:56:48.299314   69623 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0731 15:56:48.299360   69623 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0731 15:56:48.299384   69623 cache.go:56] Caching tarball of preloaded images
	I0731 15:56:48.299602   69623 preload.go:172] Found /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 15:56:48.299622   69623 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:56:48.300536   69623 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/docker-flags-048000/config.json ...
	I0731 15:56:48.300687   69623 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/docker-flags-048000/config.json: {Name:mk96c001f7c7214ce96f5fb023f228ab6bdcf46a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0731 15:56:48.325504   69623 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0731 15:56:48.325517   69623 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0731 15:56:48.325656   69623 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0731 15:56:48.325674   69623 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0731 15:56:48.325687   69623 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0731 15:56:48.325696   69623 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0731 15:56:48.325702   69623 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0731 15:56:48.328730   69623 image.go:273] response: 
	I0731 15:56:48.453202   69623 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0731 15:56:48.453254   69623 cache.go:194] Successfully downloaded all kic artifacts
	I0731 15:56:48.453313   69623 start.go:360] acquireMachinesLock for docker-flags-048000: {Name:mkd45f30b51ac6500632afbf9209211d152d907c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:56:48.453487   69623 start.go:364] duration metric: took 161.434µs to acquireMachinesLock for "docker-flags-048000"
	I0731 15:56:48.453514   69623 start.go:93] Provisioning new machine with config: &{Name:docker-flags-048000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-048000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:56:48.453573   69623 start.go:125] createHost starting for "" (driver="docker")
	I0731 15:56:48.496539   69623 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0731 15:56:48.496753   69623 start.go:159] libmachine.API.Create for "docker-flags-048000" (driver="docker")
	I0731 15:56:48.496783   69623 client.go:168] LocalClient.Create starting
	I0731 15:56:48.496921   69623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/ca.pem
	I0731 15:56:48.496971   69623 main.go:141] libmachine: Decoding PEM data...
	I0731 15:56:48.496987   69623 main.go:141] libmachine: Parsing certificate...
	I0731 15:56:48.497041   69623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/cert.pem
	I0731 15:56:48.497080   69623 main.go:141] libmachine: Decoding PEM data...
	I0731 15:56:48.497087   69623 main.go:141] libmachine: Parsing certificate...
	I0731 15:56:48.497628   69623 cli_runner.go:164] Run: docker network inspect docker-flags-048000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 15:56:48.515128   69623 cli_runner.go:211] docker network inspect docker-flags-048000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 15:56:48.515236   69623 network_create.go:284] running [docker network inspect docker-flags-048000] to gather additional debugging logs...
	I0731 15:56:48.515252   69623 cli_runner.go:164] Run: docker network inspect docker-flags-048000
	W0731 15:56:48.532294   69623 cli_runner.go:211] docker network inspect docker-flags-048000 returned with exit code 1
	I0731 15:56:48.532330   69623 network_create.go:287] error running [docker network inspect docker-flags-048000]: docker network inspect docker-flags-048000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-048000 not found
	I0731 15:56:48.532355   69623 network_create.go:289] output of [docker network inspect docker-flags-048000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-048000 not found
	
	** /stderr **
	I0731 15:56:48.532482   69623 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 15:56:48.551485   69623 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:56:48.552948   69623 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:56:48.554565   69623 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:56:48.556184   69623 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:56:48.556570   69623 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001702dd0}
	I0731 15:56:48.556586   69623 network_create.go:124] attempt to create docker network docker-flags-048000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0731 15:56:48.556664   69623 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-048000 docker-flags-048000
	I0731 15:56:48.620187   69623 network_create.go:108] docker network docker-flags-048000 192.168.85.0/24 created
	I0731 15:56:48.620314   69623 kic.go:121] calculated static IP "192.168.85.2" for the "docker-flags-048000" container
	I0731 15:56:48.620419   69623 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 15:56:48.640027   69623 cli_runner.go:164] Run: docker volume create docker-flags-048000 --label name.minikube.sigs.k8s.io=docker-flags-048000 --label created_by.minikube.sigs.k8s.io=true
	I0731 15:56:48.658247   69623 oci.go:103] Successfully created a docker volume docker-flags-048000
	I0731 15:56:48.658361   69623 cli_runner.go:164] Run: docker run --rm --name docker-flags-048000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-048000 --entrypoint /usr/bin/test -v docker-flags-048000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0731 15:56:49.078899   69623 oci.go:107] Successfully prepared a docker volume docker-flags-048000
	I0731 15:56:49.078955   69623 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:56:49.078968   69623 kic.go:194] Starting extracting preloaded images to volume ...
	I0731 15:56:49.079082   69623 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-048000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 16:02:48.502401   69623 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 16:02:48.502542   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:02:48.522639   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	I0731 16:02:48.522764   69623 retry.go:31] will retry after 174.177702ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:02:48.699373   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:02:48.720036   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	I0731 16:02:48.720133   69623 retry.go:31] will retry after 210.805134ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:02:48.931722   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:02:48.951264   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	I0731 16:02:48.951353   69623 retry.go:31] will retry after 515.047993ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:02:49.468762   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:02:49.488785   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	I0731 16:02:49.488893   69623 retry.go:31] will retry after 423.800451ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:02:49.914538   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:02:49.934522   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	W0731 16:02:49.934651   69623 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	
	W0731 16:02:49.934678   69623 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:02:49.934741   69623 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 16:02:49.934805   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:02:49.952932   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	I0731 16:02:49.953039   69623 retry.go:31] will retry after 131.030018ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:02:50.085900   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:02:50.105617   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	I0731 16:02:50.105705   69623 retry.go:31] will retry after 458.779717ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:02:50.566945   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:02:50.587169   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	I0731 16:02:50.587258   69623 retry.go:31] will retry after 413.603639ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:02:51.003250   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:02:51.022481   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	I0731 16:02:51.022575   69623 retry.go:31] will retry after 614.551431ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:02:51.639580   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:02:51.659291   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	W0731 16:02:51.659397   69623 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	
	W0731 16:02:51.659421   69623 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:02:51.659438   69623 start.go:128] duration metric: took 6m3.202658004s to createHost
	I0731 16:02:51.659451   69623 start.go:83] releasing machines lock for "docker-flags-048000", held for 6m3.202764548s
	W0731 16:02:51.659468   69623 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0731 16:02:51.659921   69623 cli_runner.go:164] Run: docker container inspect docker-flags-048000 --format={{.State.Status}}
	W0731 16:02:51.677444   69623 cli_runner.go:211] docker container inspect docker-flags-048000 --format={{.State.Status}} returned with exit code 1
	I0731 16:02:51.677504   69623 delete.go:82] Unable to get host status for docker-flags-048000, assuming it has already been deleted: state: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	W0731 16:02:51.677595   69623 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0731 16:02:51.677605   69623 start.go:729] Will try again in 5 seconds ...
	I0731 16:02:56.678929   69623 start.go:360] acquireMachinesLock for docker-flags-048000: {Name:mkd45f30b51ac6500632afbf9209211d152d907c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 16:02:56.680053   69623 start.go:364] duration metric: took 333.012µs to acquireMachinesLock for "docker-flags-048000"
	I0731 16:02:56.680114   69623 start.go:96] Skipping create...Using existing machine configuration
	I0731 16:02:56.680135   69623 fix.go:54] fixHost starting: 
	I0731 16:02:56.680628   69623 cli_runner.go:164] Run: docker container inspect docker-flags-048000 --format={{.State.Status}}
	W0731 16:02:56.699747   69623 cli_runner.go:211] docker container inspect docker-flags-048000 --format={{.State.Status}} returned with exit code 1
	I0731 16:02:56.699809   69623 fix.go:112] recreateIfNeeded on docker-flags-048000: state= err=unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:02:56.699837   69623 fix.go:117] machineExists: false. err=machine does not exist
	I0731 16:02:56.721714   69623 out.go:177] * docker "docker-flags-048000" container is missing, will recreate.
	I0731 16:02:56.743257   69623 delete.go:124] DEMOLISHING docker-flags-048000 ...
	I0731 16:02:56.743444   69623 cli_runner.go:164] Run: docker container inspect docker-flags-048000 --format={{.State.Status}}
	W0731 16:02:56.764006   69623 cli_runner.go:211] docker container inspect docker-flags-048000 --format={{.State.Status}} returned with exit code 1
	W0731 16:02:56.764067   69623 stop.go:83] unable to get state: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:02:56.764082   69623 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:02:56.764457   69623 cli_runner.go:164] Run: docker container inspect docker-flags-048000 --format={{.State.Status}}
	W0731 16:02:56.781685   69623 cli_runner.go:211] docker container inspect docker-flags-048000 --format={{.State.Status}} returned with exit code 1
	I0731 16:02:56.781740   69623 delete.go:82] Unable to get host status for docker-flags-048000, assuming it has already been deleted: state: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:02:56.781834   69623 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-048000
	W0731 16:02:56.798968   69623 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-048000 returned with exit code 1
	I0731 16:02:56.799004   69623 kic.go:371] could not find the container docker-flags-048000 to remove it. will try anyways
	I0731 16:02:56.799091   69623 cli_runner.go:164] Run: docker container inspect docker-flags-048000 --format={{.State.Status}}
	W0731 16:02:56.816075   69623 cli_runner.go:211] docker container inspect docker-flags-048000 --format={{.State.Status}} returned with exit code 1
	W0731 16:02:56.816131   69623 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:02:56.816216   69623 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-048000 /bin/bash -c "sudo init 0"
	W0731 16:02:56.833021   69623 cli_runner.go:211] docker exec --privileged -t docker-flags-048000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0731 16:02:56.833054   69623 oci.go:650] error shutdown docker-flags-048000: docker exec --privileged -t docker-flags-048000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:02:57.835503   69623 cli_runner.go:164] Run: docker container inspect docker-flags-048000 --format={{.State.Status}}
	W0731 16:02:57.855381   69623 cli_runner.go:211] docker container inspect docker-flags-048000 --format={{.State.Status}} returned with exit code 1
	I0731 16:02:57.855430   69623 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:02:57.855441   69623 oci.go:664] temporary error: container docker-flags-048000 status is  but expect it to be exited
	I0731 16:02:57.855467   69623 retry.go:31] will retry after 545.711473ms: couldn't verify container is exited. %v: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:02:58.403600   69623 cli_runner.go:164] Run: docker container inspect docker-flags-048000 --format={{.State.Status}}
	W0731 16:02:58.423910   69623 cli_runner.go:211] docker container inspect docker-flags-048000 --format={{.State.Status}} returned with exit code 1
	I0731 16:02:58.423955   69623 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:02:58.423968   69623 oci.go:664] temporary error: container docker-flags-048000 status is  but expect it to be exited
	I0731 16:02:58.423993   69623 retry.go:31] will retry after 495.322837ms: couldn't verify container is exited. %v: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:02:58.919805   69623 cli_runner.go:164] Run: docker container inspect docker-flags-048000 --format={{.State.Status}}
	W0731 16:02:58.939258   69623 cli_runner.go:211] docker container inspect docker-flags-048000 --format={{.State.Status}} returned with exit code 1
	I0731 16:02:58.939318   69623 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:02:58.939330   69623 oci.go:664] temporary error: container docker-flags-048000 status is  but expect it to be exited
	I0731 16:02:58.939352   69623 retry.go:31] will retry after 1.638541295s: couldn't verify container is exited. %v: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:03:00.578989   69623 cli_runner.go:164] Run: docker container inspect docker-flags-048000 --format={{.State.Status}}
	W0731 16:03:00.598319   69623 cli_runner.go:211] docker container inspect docker-flags-048000 --format={{.State.Status}} returned with exit code 1
	I0731 16:03:00.598363   69623 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:03:00.598372   69623 oci.go:664] temporary error: container docker-flags-048000 status is  but expect it to be exited
	I0731 16:03:00.598398   69623 retry.go:31] will retry after 896.430873ms: couldn't verify container is exited. %v: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:03:01.497236   69623 cli_runner.go:164] Run: docker container inspect docker-flags-048000 --format={{.State.Status}}
	W0731 16:03:01.517351   69623 cli_runner.go:211] docker container inspect docker-flags-048000 --format={{.State.Status}} returned with exit code 1
	I0731 16:03:01.517398   69623 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:03:01.517410   69623 oci.go:664] temporary error: container docker-flags-048000 status is  but expect it to be exited
	I0731 16:03:01.517436   69623 retry.go:31] will retry after 1.824141615s: couldn't verify container is exited. %v: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:03:03.343939   69623 cli_runner.go:164] Run: docker container inspect docker-flags-048000 --format={{.State.Status}}
	W0731 16:03:03.363909   69623 cli_runner.go:211] docker container inspect docker-flags-048000 --format={{.State.Status}} returned with exit code 1
	I0731 16:03:03.363956   69623 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:03:03.363971   69623 oci.go:664] temporary error: container docker-flags-048000 status is  but expect it to be exited
	I0731 16:03:03.363996   69623 retry.go:31] will retry after 4.928888173s: couldn't verify container is exited. %v: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:03:08.295363   69623 cli_runner.go:164] Run: docker container inspect docker-flags-048000 --format={{.State.Status}}
	W0731 16:03:08.315994   69623 cli_runner.go:211] docker container inspect docker-flags-048000 --format={{.State.Status}} returned with exit code 1
	I0731 16:03:08.316041   69623 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:03:08.316055   69623 oci.go:664] temporary error: container docker-flags-048000 status is  but expect it to be exited
	I0731 16:03:08.316086   69623 retry.go:31] will retry after 3.304272849s: couldn't verify container is exited. %v: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:03:11.622806   69623 cli_runner.go:164] Run: docker container inspect docker-flags-048000 --format={{.State.Status}}
	W0731 16:03:11.642144   69623 cli_runner.go:211] docker container inspect docker-flags-048000 --format={{.State.Status}} returned with exit code 1
	I0731 16:03:11.642192   69623 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:03:11.642201   69623 oci.go:664] temporary error: container docker-flags-048000 status is  but expect it to be exited
	I0731 16:03:11.642224   69623 retry.go:31] will retry after 4.38325432s: couldn't verify container is exited. %v: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:03:16.026311   69623 cli_runner.go:164] Run: docker container inspect docker-flags-048000 --format={{.State.Status}}
	W0731 16:03:16.046303   69623 cli_runner.go:211] docker container inspect docker-flags-048000 --format={{.State.Status}} returned with exit code 1
	I0731 16:03:16.046348   69623 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:03:16.046359   69623 oci.go:664] temporary error: container docker-flags-048000 status is  but expect it to be exited
	I0731 16:03:16.046392   69623 oci.go:88] couldn't shut down docker-flags-048000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	 
	I0731 16:03:16.046481   69623 cli_runner.go:164] Run: docker rm -f -v docker-flags-048000
	I0731 16:03:16.065083   69623 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-048000
	W0731 16:03:16.082267   69623 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-048000 returned with exit code 1
	I0731 16:03:16.082388   69623 cli_runner.go:164] Run: docker network inspect docker-flags-048000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 16:03:16.100053   69623 cli_runner.go:164] Run: docker network rm docker-flags-048000
	I0731 16:03:16.178428   69623 fix.go:124] Sleeping 1 second for extra luck!
	I0731 16:03:17.179106   69623 start.go:125] createHost starting for "" (driver="docker")
	I0731 16:03:17.202422   69623 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0731 16:03:17.202618   69623 start.go:159] libmachine.API.Create for "docker-flags-048000" (driver="docker")
	I0731 16:03:17.202649   69623 client.go:168] LocalClient.Create starting
	I0731 16:03:17.202876   69623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/ca.pem
	I0731 16:03:17.203008   69623 main.go:141] libmachine: Decoding PEM data...
	I0731 16:03:17.203044   69623 main.go:141] libmachine: Parsing certificate...
	I0731 16:03:17.203133   69623 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/cert.pem
	I0731 16:03:17.203235   69623 main.go:141] libmachine: Decoding PEM data...
	I0731 16:03:17.203252   69623 main.go:141] libmachine: Parsing certificate...
	I0731 16:03:17.224501   69623 cli_runner.go:164] Run: docker network inspect docker-flags-048000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 16:03:17.243731   69623 cli_runner.go:211] docker network inspect docker-flags-048000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 16:03:17.243824   69623 network_create.go:284] running [docker network inspect docker-flags-048000] to gather additional debugging logs...
	I0731 16:03:17.243844   69623 cli_runner.go:164] Run: docker network inspect docker-flags-048000
	W0731 16:03:17.261225   69623 cli_runner.go:211] docker network inspect docker-flags-048000 returned with exit code 1
	I0731 16:03:17.261254   69623 network_create.go:287] error running [docker network inspect docker-flags-048000]: docker network inspect docker-flags-048000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-048000 not found
	I0731 16:03:17.261270   69623 network_create.go:289] output of [docker network inspect docker-flags-048000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-048000 not found
	
	** /stderr **
	I0731 16:03:17.261431   69623 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 16:03:17.280730   69623 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 16:03:17.282347   69623 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 16:03:17.283952   69623 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 16:03:17.285639   69623 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 16:03:17.287475   69623 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 16:03:17.289352   69623 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 16:03:17.290041   69623 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00160d7f0}
	I0731 16:03:17.290065   69623 network_create.go:124] attempt to create docker network docker-flags-048000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0731 16:03:17.290180   69623 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-048000 docker-flags-048000
	I0731 16:03:17.354082   69623 network_create.go:108] docker network docker-flags-048000 192.168.103.0/24 created
	I0731 16:03:17.354120   69623 kic.go:121] calculated static IP "192.168.103.2" for the "docker-flags-048000" container
	I0731 16:03:17.354240   69623 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 16:03:17.374083   69623 cli_runner.go:164] Run: docker volume create docker-flags-048000 --label name.minikube.sigs.k8s.io=docker-flags-048000 --label created_by.minikube.sigs.k8s.io=true
	I0731 16:03:17.391611   69623 oci.go:103] Successfully created a docker volume docker-flags-048000
	I0731 16:03:17.391756   69623 cli_runner.go:164] Run: docker run --rm --name docker-flags-048000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-048000 --entrypoint /usr/bin/test -v docker-flags-048000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0731 16:03:17.640766   69623 oci.go:107] Successfully prepared a docker volume docker-flags-048000
	I0731 16:03:17.640805   69623 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 16:03:17.640819   69623 kic.go:194] Starting extracting preloaded images to volume ...
	I0731 16:03:17.640923   69623 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-048000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 16:09:17.206595   69623 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 16:09:17.206724   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:09:17.225889   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	I0731 16:09:17.225996   69623 retry.go:31] will retry after 274.977723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:09:17.503394   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:09:17.522976   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	I0731 16:09:17.523077   69623 retry.go:31] will retry after 455.882962ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:09:17.981501   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:09:18.000849   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	I0731 16:09:18.000959   69623 retry.go:31] will retry after 594.425418ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:09:18.597850   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:09:18.617487   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	I0731 16:09:18.617595   69623 retry.go:31] will retry after 537.377012ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:09:19.156857   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:09:19.176520   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	W0731 16:09:19.176623   69623 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	
	W0731 16:09:19.176644   69623 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:09:19.176705   69623 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 16:09:19.176768   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:09:19.193864   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	I0731 16:09:19.193975   69623 retry.go:31] will retry after 317.270079ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:09:19.513668   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:09:19.533527   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	I0731 16:09:19.533620   69623 retry.go:31] will retry after 503.95616ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:09:20.040089   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:09:20.060090   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	I0731 16:09:20.060186   69623 retry.go:31] will retry after 438.031383ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:09:20.500672   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:09:20.520758   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	W0731 16:09:20.520870   69623 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	
	W0731 16:09:20.520891   69623 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:09:20.520896   69623 start.go:128] duration metric: took 6m3.338476318s to createHost
	I0731 16:09:20.520975   69623 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 16:09:20.521037   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:09:20.538294   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	I0731 16:09:20.538396   69623 retry.go:31] will retry after 268.685166ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:09:20.808412   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:09:20.828381   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	I0731 16:09:20.828484   69623 retry.go:31] will retry after 284.299907ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:09:21.115169   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:09:21.134587   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	I0731 16:09:21.134682   69623 retry.go:31] will retry after 531.606637ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:09:21.668734   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:09:21.688300   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	W0731 16:09:21.688403   69623 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	
	W0731 16:09:21.688422   69623 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:09:21.688482   69623 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 16:09:21.688551   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:09:21.705477   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	I0731 16:09:21.705685   69623 retry.go:31] will retry after 139.383506ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:09:21.847450   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:09:21.866031   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	I0731 16:09:21.866221   69623 retry.go:31] will retry after 504.345685ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:09:22.371692   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:09:22.391837   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	I0731 16:09:22.391932   69623 retry.go:31] will retry after 703.7784ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:09:23.097980   69623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000
	W0731 16:09:23.118771   69623 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000 returned with exit code 1
	W0731 16:09:23.118865   69623 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	
	W0731 16:09:23.118884   69623 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-048000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-048000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	I0731 16:09:23.118896   69623 fix.go:56] duration metric: took 6m26.435368233s for fixHost
	I0731 16:09:23.118904   69623 start.go:83] releasing machines lock for "docker-flags-048000", held for 6m26.43541694s
	W0731 16:09:23.118982   69623 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-048000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p docker-flags-048000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0731 16:09:23.164409   69623 out.go:177] 
	W0731 16:09:23.185482   69623 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0731 16:09:23.185541   69623 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0731 16:09:23.185588   69623 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0731 16:09:23.207373   69623 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-048000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-048000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-048000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (162.666822ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-048000 host status: state: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	

** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-048000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-048000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-048000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (160.7581ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-048000 host status: state: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000
	

** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-048000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-048000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-31 16:09:23.627793 -0700 PDT m=+6904.300298472
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-048000
helpers_test.go:235: (dbg) docker inspect docker-flags-048000:

-- stdout --
	[
	    {
	        "Name": "docker-flags-048000",
	        "Id": "dc266eed5e6cbf64a7dc4aba607cbf703e8df638664d5fb10dd84876ae337205",
	        "Created": "2024-07-31T23:03:17.305683228Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "docker-flags-048000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-048000 -n docker-flags-048000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-048000 -n docker-flags-048000: exit status 7 (73.741633ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0731 16:09:23.720856   70268 status.go:249] status error: host: state: unknown state "docker-flags-048000": docker container inspect docker-flags-048000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-048000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-048000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-048000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-048000
--- FAIL: TestDockerFlags (756.57s)

TestForceSystemdFlag (756.18s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-942000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-942000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 52 (12m35.471706887s)

-- stdout --
	* [force-systemd-flag-942000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-flag-942000" primary control-plane node in "force-systemd-flag-942000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-942000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0731 15:56:15.787420   69546 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:56:15.787605   69546 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:56:15.787611   69546 out.go:304] Setting ErrFile to fd 2...
	I0731 15:56:15.787615   69546 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:56:15.787813   69546 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 15:56:15.789252   69546 out.go:298] Setting JSON to false
	I0731 15:56:15.811927   69546 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":23142,"bootTime":1722443433,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0731 15:56:15.812023   69546 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:56:15.833991   69546 out.go:177] * [force-systemd-flag-942000] minikube v1.33.1 on Darwin 14.5
	I0731 15:56:15.876684   69546 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 15:56:15.876767   69546 notify.go:220] Checking for updates...
	I0731 15:56:15.919645   69546 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	I0731 15:56:15.940643   69546 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0731 15:56:15.961802   69546 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:56:16.003656   69546 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	I0731 15:56:16.024942   69546 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:56:16.047153   69546 config.go:182] Loaded profile config "force-systemd-env-617000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:56:16.047252   69546 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:56:16.070233   69546 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0731 15:56:16.070515   69546 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 15:56:16.148155   69546 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:110 OomKillDisable:false NGoroutines:218 SystemTime:2024-07-31 22:56:16.138756568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0731 15:56:16.191352   69546 out.go:177] * Using the docker driver based on user configuration
	I0731 15:56:16.213235   69546 start.go:297] selected driver: docker
	I0731 15:56:16.213264   69546 start.go:901] validating driver "docker" against <nil>
	I0731 15:56:16.213279   69546 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:56:16.217641   69546 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 15:56:16.293903   69546 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:110 OomKillDisable:false NGoroutines:218 SystemTime:2024-07-31 22:56:16.285089333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0731 15:56:16.294100   69546 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:56:16.294282   69546 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 15:56:16.316320   69546 out.go:177] * Using Docker Desktop driver with root privileges
	I0731 15:56:16.344589   69546 cni.go:84] Creating CNI manager for ""
	I0731 15:56:16.344638   69546 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:56:16.344660   69546 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 15:56:16.344758   69546 start.go:340] cluster config:
	{Name:force-systemd-flag-942000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-942000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:56:16.365779   69546 out.go:177] * Starting "force-systemd-flag-942000" primary control-plane node in "force-systemd-flag-942000" cluster
	I0731 15:56:16.407864   69546 cache.go:121] Beginning downloading kic base image for docker with docker
	I0731 15:56:16.430612   69546 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0731 15:56:16.472701   69546 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:56:16.472741   69546 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0731 15:56:16.472795   69546 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0731 15:56:16.472818   69546 cache.go:56] Caching tarball of preloaded images
	I0731 15:56:16.473066   69546 preload.go:172] Found /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 15:56:16.473085   69546 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:56:16.474018   69546 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/force-systemd-flag-942000/config.json ...
	I0731 15:56:16.474171   69546 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/force-systemd-flag-942000/config.json: {Name:mka054360bf3913b654a4553dc7efba2bc5b3806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
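
The config write above is guarded by a named lock with a 500ms retry delay and a 1m timeout (the lock.go line shows the Delay/Timeout values). A minimal Go sketch of that write-under-lock pattern, assuming a simple lock-file scheme; the helper names here are illustrative and this is not minikube's actual pkg/util/lock implementation:

package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"os"
	"time"
)

// writeConfigLocked serializes cfg to path, holding a cooperative lock file
// while writing. Acquisition retries every `delay` until `timeout` elapses,
// mirroring the Delay:500ms Timeout:1m0s values in the log line above.
func writeConfigLocked(path string, cfg any, delay, timeout time.Duration) error {
	lockPath := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		// O_CREATE|O_EXCL makes creation atomic: only one writer wins.
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			break
		}
		if !errors.Is(err, os.ErrExist) {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s", lockPath)
		}
		time.Sleep(delay)
	}
	defer os.Remove(lockPath)

	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	cfg := map[string]string{"Name": "force-systemd-flag-942000"}
	if err := writeConfigLocked("config.json", cfg, 500*time.Millisecond, time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
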
	W0731 15:56:16.498833   69546 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0731 15:56:16.498859   69546 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0731 15:56:16.498989   69546 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0731 15:56:16.499008   69546 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0731 15:56:16.499014   69546 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0731 15:56:16.499022   69546 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0731 15:56:16.499026   69546 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0731 15:56:16.502095   69546 image.go:273] response: 
	I0731 15:56:16.644135   69546 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0731 15:56:16.644183   69546 cache.go:194] Successfully downloaded all kic artifacts
	I0731 15:56:16.644226   69546 start.go:360] acquireMachinesLock for force-systemd-flag-942000: {Name:mk6835ffe9ed6235b01b12b1569bba583a22908a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:56:16.644404   69546 start.go:364] duration metric: took 166.092µs to acquireMachinesLock for "force-systemd-flag-942000"
	I0731 15:56:16.644436   69546 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-942000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-942000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:56:16.644492   69546 start.go:125] createHost starting for "" (driver="docker")
	I0731 15:56:16.687036   69546 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0731 15:56:16.687237   69546 start.go:159] libmachine.API.Create for "force-systemd-flag-942000" (driver="docker")
	I0731 15:56:16.687268   69546 client.go:168] LocalClient.Create starting
	I0731 15:56:16.687396   69546 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/ca.pem
	I0731 15:56:16.687448   69546 main.go:141] libmachine: Decoding PEM data...
	I0731 15:56:16.687466   69546 main.go:141] libmachine: Parsing certificate...
	I0731 15:56:16.687526   69546 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/cert.pem
	I0731 15:56:16.687564   69546 main.go:141] libmachine: Decoding PEM data...
	I0731 15:56:16.687572   69546 main.go:141] libmachine: Parsing certificate...
	I0731 15:56:16.688110   69546 cli_runner.go:164] Run: docker network inspect force-systemd-flag-942000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 15:56:16.705422   69546 cli_runner.go:211] docker network inspect force-systemd-flag-942000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 15:56:16.705533   69546 network_create.go:284] running [docker network inspect force-systemd-flag-942000] to gather additional debugging logs...
	I0731 15:56:16.705553   69546 cli_runner.go:164] Run: docker network inspect force-systemd-flag-942000
	W0731 15:56:16.722841   69546 cli_runner.go:211] docker network inspect force-systemd-flag-942000 returned with exit code 1
	I0731 15:56:16.722869   69546 network_create.go:287] error running [docker network inspect force-systemd-flag-942000]: docker network inspect force-systemd-flag-942000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-942000 not found
	I0731 15:56:16.722883   69546 network_create.go:289] output of [docker network inspect force-systemd-flag-942000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-942000 not found
	
	** /stderr **
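
The inspect-then-debug sequence above is a common pattern: query the network with a Go template, and when the command exits non-zero, inspect the daemon's stderr to tell "does not exist yet" (expected before first creation) apart from a real failure. A rough Go sketch of that check, assuming docker is on PATH; the function name is hypothetical and the two log invocations are folded into one by capturing stderr directly:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// networkExists reports whether a docker network of the given name exists,
// treating the daemon's "not found" answer as a clean false rather than an error.
func networkExists(name string) (bool, error) {
	cmd := exec.Command("docker", "network", "inspect", name, "--format", "{{.Name}}")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		if strings.Contains(stderr.String(), "not found") {
			return false, nil // expected before the network is first created
		}
		return false, fmt.Errorf("docker network inspect %s: %v\nstderr: %s", name, err, stderr.String())
	}
	return true, nil
}

func main() {
	ok, err := networkExists("force-systemd-flag-942000")
	fmt.Println(ok, err)
}
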
	I0731 15:56:16.723037   69546 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 15:56:16.741999   69546 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:56:16.743407   69546 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:56:16.743763   69546 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014670e0}
	I0731 15:56:16.743780   69546 network_create.go:124] attempt to create docker network force-systemd-flag-942000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0731 15:56:16.743853   69546 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-942000 force-systemd-flag-942000
	W0731 15:56:16.761387   69546 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-942000 force-systemd-flag-942000 returned with exit code 1
	W0731 15:56:16.761426   69546 network_create.go:149] failed to create docker network force-systemd-flag-942000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-942000 force-systemd-flag-942000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0731 15:56:16.761446   69546 network_create.go:116] failed to create docker network force-systemd-flag-942000 192.168.67.0/24, will retry: subnet is taken
	I0731 15:56:16.762842   69546 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:56:16.763211   69546 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015e8cf0}
	I0731 15:56:16.763226   69546 network_create.go:124] attempt to create docker network force-systemd-flag-942000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0731 15:56:16.763295   69546 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-942000 force-systemd-flag-942000
	I0731 15:56:16.826527   69546 network_create.go:108] docker network force-systemd-flag-942000 192.168.76.0/24 created
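
The subnet search visible above steps the third octet by 9 (192.168.49.0/24, .58.0/24, .67.0/24, .76.0/24, ...), skips candidates already known to be reserved, and moves to the next candidate when the daemon answers "Pool overlaps with other one on this address space", exactly as happened for 192.168.67.0/24 before 192.168.76.0/24 succeeded. A Go sketch of that probing loop, with illustrative bounds and names rather than minikube's network.go internals:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// createNetwork tries candidate /24 subnets until `docker network create`
// succeeds, skipping to the next candidate on a pool-overlap error.
func createNetwork(name string) (string, error) {
	for octet := 49; octet <= 247; octet += 9 { // 49, 58, 67, ... as in the log
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=65535", name)
		var stderr bytes.Buffer
		cmd.Stderr = &stderr
		if err := cmd.Run(); err != nil {
			if strings.Contains(stderr.String(), "Pool overlaps") {
				continue // subnet taken by another network; try the next one
			}
			return "", fmt.Errorf("creating %s: %v\nstderr: %s", subnet, err, stderr.String())
		}
		return subnet, nil
	}
	return "", fmt.Errorf("no free 192.168.x.0/24 subnet found")
}

func main() {
	subnet, err := createNetwork("force-systemd-flag-942000")
	fmt.Println(subnet, err)
}
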
	I0731 15:56:16.826566   69546 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-942000" container
	I0731 15:56:16.826680   69546 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 15:56:16.845995   69546 cli_runner.go:164] Run: docker volume create force-systemd-flag-942000 --label name.minikube.sigs.k8s.io=force-systemd-flag-942000 --label created_by.minikube.sigs.k8s.io=true
	I0731 15:56:16.864773   69546 oci.go:103] Successfully created a docker volume force-systemd-flag-942000
	I0731 15:56:16.864883   69546 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-942000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-942000 --entrypoint /usr/bin/test -v force-systemd-flag-942000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0731 15:56:17.265732   69546 oci.go:107] Successfully prepared a docker volume force-systemd-flag-942000
	I0731 15:56:17.265779   69546 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:56:17.265792   69546 kic.go:194] Starting extracting preloaded images to volume ...
	I0731 15:56:17.265916   69546 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-942000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
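
The extraction step just above runs tar inside a disposable container so the lz4-compressed preload is unpacked directly onto the named volume; note that the next log line is six minutes later, which is consistent with the 6-minute create-host timeout that fires below. The equivalent invocation from Go, sketched with os/exec and with paths shortened for readability (this mirrors the logged command, not minikube's kic.go code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks a host-side .tar.lz4 onto a docker named volume by
// running tar as the entrypoint of a throwaway kicbase container.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro", // host tarball, mounted read-only
		"-v", volume+":/extractDir",        // named volume as extraction target
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := extractPreload(
		"preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4",
		"force-systemd-flag-942000",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326")
	fmt.Println(err)
}
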
	I0731 16:02:16.692459   69546 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 16:02:16.692707   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:02:16.713357   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	I0731 16:02:16.713464   69546 retry.go:31] will retry after 258.310713ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:16.972495   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:02:16.991848   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	I0731 16:02:16.991960   69546 retry.go:31] will retry after 252.375475ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:17.246787   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:02:17.266638   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	I0731 16:02:17.266734   69546 retry.go:31] will retry after 509.863182ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:17.778974   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:02:17.798521   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	I0731 16:02:17.798648   69546 retry.go:31] will retry after 555.624053ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:18.356660   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:02:18.376950   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	W0731 16:02:18.377057   69546 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	
	W0731 16:02:18.377077   69546 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:18.377145   69546 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 16:02:18.377203   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:02:18.394025   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	I0731 16:02:18.394120   69546 retry.go:31] will retry after 317.050421ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:18.711533   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:02:18.731020   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	I0731 16:02:18.731131   69546 retry.go:31] will retry after 293.771665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:19.027307   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:02:19.047056   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	I0731 16:02:19.047151   69546 retry.go:31] will retry after 821.418523ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:19.868993   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:02:19.888807   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	W0731 16:02:19.888929   69546 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	
	W0731 16:02:19.888951   69546 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
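
The repeated inspect failures above come from a polling loop: ask docker for the host port mapped to the container's 22/tcp, and retry with a short jittered backoff while the container does not exist yet. A stand-in for that loop in Go; minikube's retry.go helper is not shown in this log, so math/rand supplies the jitter here and the attempt count is illustrative:

package main

import (
	"bytes"
	"fmt"
	"math/rand"
	"os/exec"
	"strings"
	"time"
)

// sshHostPort polls `docker container inspect` for the host port bound to
// 22/tcp, retrying with jittered delays like the 250-850ms waits in the log.
func sshHostPort(container string, attempts int) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	var lastErr error
	for i := 0; i < attempts; i++ {
		var out, stderr bytes.Buffer
		cmd := exec.Command("docker", "container", "inspect", "-f", format, container)
		cmd.Stdout, cmd.Stderr = &out, &stderr
		if err := cmd.Run(); err == nil {
			return strings.TrimSpace(out.String()), nil
		}
		lastErr = fmt.Errorf("get port 22 for %q: %s", container, strings.TrimSpace(stderr.String()))
		time.Sleep(time.Duration(250+rand.Intn(600)) * time.Millisecond) // jittered backoff
	}
	return "", lastErr
}

func main() {
	port, err := sshHostPort("force-systemd-flag-942000", 5)
	fmt.Println(port, err)
}
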
	I0731 16:02:19.888961   69546 start.go:128] duration metric: took 6m3.241265627s to createHost
	I0731 16:02:19.888972   69546 start.go:83] releasing machines lock for "force-systemd-flag-942000", held for 6m3.241367023s
	W0731 16:02:19.888987   69546 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0731 16:02:19.889433   69546 cli_runner.go:164] Run: docker container inspect force-systemd-flag-942000 --format={{.State.Status}}
	W0731 16:02:19.906255   69546 cli_runner.go:211] docker container inspect force-systemd-flag-942000 --format={{.State.Status}} returned with exit code 1
	I0731 16:02:19.906314   69546 delete.go:82] Unable to get host status for force-systemd-flag-942000, assuming it has already been deleted: state: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	W0731 16:02:19.906404   69546 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0731 16:02:19.906415   69546 start.go:729] Will try again in 5 seconds ...
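
Stepping back, the control flow in this stretch of the log is: one createHost attempt bounded by a 360-second timeout, a StartHost warning, a 5-second pause, then a single recreate attempt via the fixHost path that follows. A compressed Go sketch of that outer retry, with stand-in function names (this is an interpretation of the logged sequence, not minikube's start.go):

package main

import (
	"errors"
	"fmt"
	"time"
)

var errTimeout = errors.New("create host timed out in 360.000000 seconds")

// createHost stands in for the real provisioning work, which in this run
// never produced a container before the deadline.
func createHost() error { return errTimeout }

// startWithRetry makes one attempt, and on failure waits 5 seconds and
// retries once, matching the "Will try again in 5 seconds" line above.
func startWithRetry() error {
	if err := createHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: creating host: %v\n", err)
		time.Sleep(5 * time.Second)
		return createHost() // second and final attempt (recreate path)
	}
	return nil
}

func main() { fmt.Println(startWithRetry()) }
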
	I0731 16:02:24.908680   69546 start.go:360] acquireMachinesLock for force-systemd-flag-942000: {Name:mk6835ffe9ed6235b01b12b1569bba583a22908a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 16:02:24.908899   69546 start.go:364] duration metric: took 170.961µs to acquireMachinesLock for "force-systemd-flag-942000"
	I0731 16:02:24.908938   69546 start.go:96] Skipping create...Using existing machine configuration
	I0731 16:02:24.908959   69546 fix.go:54] fixHost starting: 
	I0731 16:02:24.909388   69546 cli_runner.go:164] Run: docker container inspect force-systemd-flag-942000 --format={{.State.Status}}
	W0731 16:02:24.929479   69546 cli_runner.go:211] docker container inspect force-systemd-flag-942000 --format={{.State.Status}} returned with exit code 1
	I0731 16:02:24.929541   69546 fix.go:112] recreateIfNeeded on force-systemd-flag-942000: state= err=unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:24.929558   69546 fix.go:117] machineExists: false. err=machine does not exist
	I0731 16:02:24.951745   69546 out.go:177] * docker "force-systemd-flag-942000" container is missing, will recreate.
	I0731 16:02:24.994131   69546 delete.go:124] DEMOLISHING force-systemd-flag-942000 ...
	I0731 16:02:24.994326   69546 cli_runner.go:164] Run: docker container inspect force-systemd-flag-942000 --format={{.State.Status}}
	W0731 16:02:25.012725   69546 cli_runner.go:211] docker container inspect force-systemd-flag-942000 --format={{.State.Status}} returned with exit code 1
	W0731 16:02:25.012774   69546 stop.go:83] unable to get state: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:25.012792   69546 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:25.013189   69546 cli_runner.go:164] Run: docker container inspect force-systemd-flag-942000 --format={{.State.Status}}
	W0731 16:02:25.030161   69546 cli_runner.go:211] docker container inspect force-systemd-flag-942000 --format={{.State.Status}} returned with exit code 1
	I0731 16:02:25.030216   69546 delete.go:82] Unable to get host status for force-systemd-flag-942000, assuming it has already been deleted: state: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:25.030307   69546 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-942000
	W0731 16:02:25.047290   69546 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-942000 returned with exit code 1
	I0731 16:02:25.047329   69546 kic.go:371] could not find the container force-systemd-flag-942000 to remove it. will try anyways
	I0731 16:02:25.047409   69546 cli_runner.go:164] Run: docker container inspect force-systemd-flag-942000 --format={{.State.Status}}
	W0731 16:02:25.064299   69546 cli_runner.go:211] docker container inspect force-systemd-flag-942000 --format={{.State.Status}} returned with exit code 1
	W0731 16:02:25.064364   69546 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:25.064450   69546 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-942000 /bin/bash -c "sudo init 0"
	W0731 16:02:25.081889   69546 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-942000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0731 16:02:25.081919   69546 oci.go:650] error shutdown force-systemd-flag-942000: docker exec --privileged -t force-systemd-flag-942000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:26.084335   69546 cli_runner.go:164] Run: docker container inspect force-systemd-flag-942000 --format={{.State.Status}}
	W0731 16:02:26.103660   69546 cli_runner.go:211] docker container inspect force-systemd-flag-942000 --format={{.State.Status}} returned with exit code 1
	I0731 16:02:26.103706   69546 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:26.103715   69546 oci.go:664] temporary error: container force-systemd-flag-942000 status is  but expect it to be exited
	I0731 16:02:26.103736   69546 retry.go:31] will retry after 377.597151ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:26.482765   69546 cli_runner.go:164] Run: docker container inspect force-systemd-flag-942000 --format={{.State.Status}}
	W0731 16:02:26.502313   69546 cli_runner.go:211] docker container inspect force-systemd-flag-942000 --format={{.State.Status}} returned with exit code 1
	I0731 16:02:26.502363   69546 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:26.502373   69546 oci.go:664] temporary error: container force-systemd-flag-942000 status is  but expect it to be exited
	I0731 16:02:26.502401   69546 retry.go:31] will retry after 1.116840171s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:27.620553   69546 cli_runner.go:164] Run: docker container inspect force-systemd-flag-942000 --format={{.State.Status}}
	W0731 16:02:27.641193   69546 cli_runner.go:211] docker container inspect force-systemd-flag-942000 --format={{.State.Status}} returned with exit code 1
	I0731 16:02:27.641239   69546 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:27.641249   69546 oci.go:664] temporary error: container force-systemd-flag-942000 status is  but expect it to be exited
	I0731 16:02:27.641275   69546 retry.go:31] will retry after 1.170965609s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:28.814460   69546 cli_runner.go:164] Run: docker container inspect force-systemd-flag-942000 --format={{.State.Status}}
	W0731 16:02:28.834361   69546 cli_runner.go:211] docker container inspect force-systemd-flag-942000 --format={{.State.Status}} returned with exit code 1
	I0731 16:02:28.834420   69546 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:28.834436   69546 oci.go:664] temporary error: container force-systemd-flag-942000 status is  but expect it to be exited
	I0731 16:02:28.834474   69546 retry.go:31] will retry after 1.220935841s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:30.057866   69546 cli_runner.go:164] Run: docker container inspect force-systemd-flag-942000 --format={{.State.Status}}
	W0731 16:02:30.077304   69546 cli_runner.go:211] docker container inspect force-systemd-flag-942000 --format={{.State.Status}} returned with exit code 1
	I0731 16:02:30.077351   69546 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:30.077360   69546 oci.go:664] temporary error: container force-systemd-flag-942000 status is  but expect it to be exited
	I0731 16:02:30.077385   69546 retry.go:31] will retry after 3.704053442s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:33.782987   69546 cli_runner.go:164] Run: docker container inspect force-systemd-flag-942000 --format={{.State.Status}}
	W0731 16:02:33.803424   69546 cli_runner.go:211] docker container inspect force-systemd-flag-942000 --format={{.State.Status}} returned with exit code 1
	I0731 16:02:33.803475   69546 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:33.803486   69546 oci.go:664] temporary error: container force-systemd-flag-942000 status is  but expect it to be exited
	I0731 16:02:33.803514   69546 retry.go:31] will retry after 2.155634633s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:35.961551   69546 cli_runner.go:164] Run: docker container inspect force-systemd-flag-942000 --format={{.State.Status}}
	W0731 16:02:35.981099   69546 cli_runner.go:211] docker container inspect force-systemd-flag-942000 --format={{.State.Status}} returned with exit code 1
	I0731 16:02:35.981154   69546 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:35.981165   69546 oci.go:664] temporary error: container force-systemd-flag-942000 status is  but expect it to be exited
	I0731 16:02:35.981201   69546 retry.go:31] will retry after 6.760987934s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:42.743923   69546 cli_runner.go:164] Run: docker container inspect force-systemd-flag-942000 --format={{.State.Status}}
	W0731 16:02:42.763804   69546 cli_runner.go:211] docker container inspect force-systemd-flag-942000 --format={{.State.Status}} returned with exit code 1
	I0731 16:02:42.763859   69546 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:02:42.763871   69546 oci.go:664] temporary error: container force-systemd-flag-942000 status is  but expect it to be exited
	I0731 16:02:42.763899   69546 oci.go:88] couldn't shut down force-systemd-flag-942000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	 
	I0731 16:02:42.763975   69546 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-942000
	I0731 16:02:42.781672   69546 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-942000
	W0731 16:02:42.798742   69546 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-942000 returned with exit code 1
	I0731 16:02:42.798858   69546 cli_runner.go:164] Run: docker network inspect force-systemd-flag-942000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 16:02:42.816250   69546 cli_runner.go:164] Run: docker network rm force-systemd-flag-942000
	I0731 16:02:42.897626   69546 fix.go:124] Sleeping 1 second for extra luck!
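
The DEMOLISHING pass above is a best-effort teardown: the container (with its volumes) and the per-profile network are removed, and "No such container" responses are tolerated because the create never completed. A rough Go equivalent, assuming docker on PATH; this is an illustrative stand-in, not minikube's delete.go:

package main

import (
	"fmt"
	"os/exec"
)

// demolish removes the per-profile container and network, ignoring errors,
// since half-created state may mean either resource never existed.
func demolish(name string) {
	for _, args := range [][]string{
		{"rm", "-f", "-v", name}, // remove container plus anonymous volumes
		{"network", "rm", name},  // remove the per-profile bridge network
	} {
		if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
			fmt.Printf("docker %v (ignored): %v: %s\n", args, err, out)
		}
	}
}

func main() { demolish("force-systemd-flag-942000") }
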
	I0731 16:02:43.899784   69546 start.go:125] createHost starting for "" (driver="docker")
	I0731 16:02:43.922003   69546 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0731 16:02:43.922132   69546 start.go:159] libmachine.API.Create for "force-systemd-flag-942000" (driver="docker")
	I0731 16:02:43.922155   69546 client.go:168] LocalClient.Create starting
	I0731 16:02:43.922316   69546 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/ca.pem
	I0731 16:02:43.922383   69546 main.go:141] libmachine: Decoding PEM data...
	I0731 16:02:43.922401   69546 main.go:141] libmachine: Parsing certificate...
	I0731 16:02:43.922467   69546 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/cert.pem
	I0731 16:02:43.922522   69546 main.go:141] libmachine: Decoding PEM data...
	I0731 16:02:43.922533   69546 main.go:141] libmachine: Parsing certificate...
	I0731 16:02:43.923021   69546 cli_runner.go:164] Run: docker network inspect force-systemd-flag-942000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 16:02:43.941168   69546 cli_runner.go:211] docker network inspect force-systemd-flag-942000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 16:02:43.941276   69546 network_create.go:284] running [docker network inspect force-systemd-flag-942000] to gather additional debugging logs...
	I0731 16:02:43.941295   69546 cli_runner.go:164] Run: docker network inspect force-systemd-flag-942000
	W0731 16:02:43.959086   69546 cli_runner.go:211] docker network inspect force-systemd-flag-942000 returned with exit code 1
	I0731 16:02:43.959117   69546 network_create.go:287] error running [docker network inspect force-systemd-flag-942000]: docker network inspect force-systemd-flag-942000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-942000 not found
	I0731 16:02:43.959131   69546 network_create.go:289] output of [docker network inspect force-systemd-flag-942000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-942000 not found
	
	** /stderr **
	I0731 16:02:43.959285   69546 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 16:02:43.978685   69546 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 16:02:43.980088   69546 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 16:02:43.981614   69546 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 16:02:43.982915   69546 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 16:02:43.984262   69546 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 16:02:43.984601   69546 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000684f20}
	I0731 16:02:43.984616   69546 network_create.go:124] attempt to create docker network force-systemd-flag-942000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0731 16:02:43.984689   69546 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-942000 force-systemd-flag-942000
	I0731 16:02:44.047809   69546 network_create.go:108] docker network force-systemd-flag-942000 192.168.94.0/24 created
	I0731 16:02:44.047857   69546 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-flag-942000" container
	I0731 16:02:44.047993   69546 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 16:02:44.067542   69546 cli_runner.go:164] Run: docker volume create force-systemd-flag-942000 --label name.minikube.sigs.k8s.io=force-systemd-flag-942000 --label created_by.minikube.sigs.k8s.io=true
	I0731 16:02:44.084763   69546 oci.go:103] Successfully created a docker volume force-systemd-flag-942000
	I0731 16:02:44.084880   69546 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-942000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-942000 --entrypoint /usr/bin/test -v force-systemd-flag-942000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0731 16:02:44.351283   69546 oci.go:107] Successfully prepared a docker volume force-systemd-flag-942000
	I0731 16:02:44.351402   69546 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 16:02:44.351467   69546 kic.go:194] Starting extracting preloaded images to volume ...
	I0731 16:02:44.351633   69546 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-942000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 16:08:43.926579   69546 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 16:08:43.926716   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:08:43.947361   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	I0731 16:08:43.947484   69546 retry.go:31] will retry after 153.345738ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:08:44.103281   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:08:44.122869   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	I0731 16:08:44.122965   69546 retry.go:31] will retry after 517.800254ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:08:44.641860   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:08:44.661896   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	I0731 16:08:44.662019   69546 retry.go:31] will retry after 411.074112ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:08:45.073564   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:08:45.093567   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	I0731 16:08:45.093669   69546 retry.go:31] will retry after 614.773575ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:08:45.710836   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:08:45.730661   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	W0731 16:08:45.730771   69546 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	
	W0731 16:08:45.730798   69546 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:08:45.730853   69546 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 16:08:45.730908   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:08:45.747937   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	I0731 16:08:45.748036   69546 retry.go:31] will retry after 346.954796ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:08:46.097401   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:08:46.116914   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	I0731 16:08:46.117017   69546 retry.go:31] will retry after 397.744363ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:08:46.516070   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:08:46.535845   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	I0731 16:08:46.535942   69546 retry.go:31] will retry after 390.173886ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:08:46.928607   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:08:46.949112   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	I0731 16:08:46.949211   69546 retry.go:31] will retry after 580.582167ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:08:47.531466   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:08:47.551276   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	W0731 16:08:47.551389   69546 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	
	W0731 16:08:47.551406   69546 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:08:47.551425   69546 start.go:128] duration metric: took 6m3.648399049s to createHost
	I0731 16:08:47.551494   69546 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 16:08:47.551549   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:08:47.568961   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	I0731 16:08:47.569071   69546 retry.go:31] will retry after 329.069694ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:08:47.898695   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:08:47.918212   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	I0731 16:08:47.918306   69546 retry.go:31] will retry after 344.543205ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:08:48.265165   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:08:48.284431   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	I0731 16:08:48.284532   69546 retry.go:31] will retry after 579.54924ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:08:48.866511   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:08:48.885924   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	I0731 16:08:48.886024   69546 retry.go:31] will retry after 491.917279ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:08:49.378252   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:08:49.397938   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	W0731 16:08:49.398040   69546 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	
	W0731 16:08:49.398062   69546 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:08:49.398124   69546 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 16:08:49.398195   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:08:49.415734   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	I0731 16:08:49.415831   69546 retry.go:31] will retry after 241.194036ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:08:49.659400   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:08:49.679099   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	I0731 16:08:49.679195   69546 retry.go:31] will retry after 518.976286ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:08:50.200462   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:08:50.220539   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	I0731 16:08:50.220654   69546 retry.go:31] will retry after 838.009841ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:08:51.059681   69546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000
	W0731 16:08:51.079151   69546 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000 returned with exit code 1
	W0731 16:08:51.079258   69546 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	
	W0731 16:08:51.079279   69546 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-942000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-942000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	I0731 16:08:51.079297   69546 fix.go:56] duration metric: took 6m26.166940051s for fixHost
	I0731 16:08:51.079305   69546 start.go:83] releasing machines lock for "force-systemd-flag-942000", held for 6m26.166999479s
	W0731 16:08:51.079387   69546 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-942000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-942000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0731 16:08:51.101334   69546 out.go:177] 
	W0731 16:08:51.123051   69546 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0731 16:08:51.123124   69546 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0731 16:08:51.123155   69546 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0731 16:08:51.165918   69546 out.go:177] 

** /stderr **
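For context on the retry loop above: minikube's disk probes (`df -h /var`, `df -BG /var`) each need an SSH session into the node, and the SSH endpoint is resolved by asking Docker which host port is published for the container's 22/tcp. Since the container apparently was never created, every lookup fails with "No such container". A minimal sketch of that lookup, using a placeholder name (`example-node` is an assumption, not from this run):

    # Resolve the host port mapped to container port 22/tcp -- the exact
    # template minikube runs repeatedly in the log above.
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      example-node
    # On a running minikube node this prints a bare port number (e.g. 32772);
    # here it exits 1 because no such container exists.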
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-942000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-942000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-942000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (159.420144ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-flag-942000 host status: state: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000
	

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-942000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
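The assertion failing here is a plain template query over `docker info`, run inside the node via `minikube ssh`; with `--force-systemd` in effect it is expected to print `systemd` rather than the usual default `cgroupfs`. A standalone sketch of the same query (run against a local daemon purely for illustration):

    # Report the cgroup driver the Docker daemon is using; the test expects
    # "systemd" on a force-systemd cluster. Typical default output: cgroupfs
    docker info --format '{{.CgroupDriver}}'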
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-31 16:08:51.382095 -0700 PDT m=+6872.054883795
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-942000
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-942000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-942000",
	        "Id": "d26c3396606136e3078a1746ca35801203cfce777efd0dc72977610e52824f6d",
	        "Created": "2024-07-31T23:02:43.998879391Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-flag-942000"
	        }
	    }
	]

-- /stdout --
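Note that the post-mortem `docker inspect` above did match an object: the leftover bridge *network* named force-systemd-flag-942000 (`"Containers": {}`), not a container. Bare `docker inspect` resolves any object type by name, and the empty network is all that survives of the profile. Scoping the inspect by type makes the distinction explicit; a small sketch:

    # The generic inspect matched the minikube-created bridge network:
    docker network inspect force-systemd-flag-942000     # succeeds; Subnet 192.168.94.0/24
    # Scoping to containers reproduces the failure seen throughout this log:
    docker container inspect force-systemd-flag-942000   # Error: No such container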
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-942000 -n force-systemd-flag-942000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-942000 -n force-systemd-flag-942000: exit status 7 (73.852509ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0731 16:08:51.475524   70186 status.go:249] status error: host: state: unknown state "force-systemd-flag-942000": docker container inspect force-systemd-flag-942000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-942000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-942000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-942000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-942000
--- FAIL: TestForceSystemdFlag (756.18s)
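The ~756 s wall time is almost entirely the two provisioning attempts, each of which burned its full 360 s create-host budget plus surrounding image work: 6m03.6s for the initial createHost and 6m26.2s for the fixHost recreate, per the duration metrics above. Rough arithmetic, illustrative only:

    # Figures rounded from the duration metrics in the log:
    # createHost ~= 364 s, fixHost recreate ~= 386 s
    echo $(( 364 + 386 ))   # ~750 s, matching the 756.18 s test duration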

TestForceSystemdEnv (754.89s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-617000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E0731 15:44:33.231166   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 15:46:44.239171   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 15:48:41.089267   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 15:49:33.233961   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 15:52:36.338136   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 15:53:41.135715   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 15:54:33.283724   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-617000 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 52 (12m34.184618977s)

-- stdout --
	* [force-systemd-env-617000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-env-617000" primary control-plane node in "force-systemd-env-617000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-617000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0731 15:44:12.697691   69261 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:44:12.697871   69261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:44:12.697877   69261 out.go:304] Setting ErrFile to fd 2...
	I0731 15:44:12.697880   69261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:44:12.698046   69261 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 15:44:12.699441   69261 out.go:298] Setting JSON to false
	I0731 15:44:12.721798   69261 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":22419,"bootTime":1722443433,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0731 15:44:12.721888   69261 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:44:12.743768   69261 out.go:177] * [force-systemd-env-617000] minikube v1.33.1 on Darwin 14.5
	I0731 15:44:12.764578   69261 notify.go:220] Checking for updates...
	I0731 15:44:12.786734   69261 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 15:44:12.807760   69261 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	I0731 15:44:12.828667   69261 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0731 15:44:12.849775   69261 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:44:12.870757   69261 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	I0731 15:44:12.891639   69261 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0731 15:44:12.913477   69261 config.go:182] Loaded profile config "offline-docker-436000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:44:12.913618   69261 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:44:12.937028   69261 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0731 15:44:12.937219   69261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 15:44:13.019329   69261 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:98 OomKillDisable:false NGoroutines:182 SystemTime:2024-07-31 22:44:13.010135061 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0731 15:44:13.062071   69261 out.go:177] * Using the docker driver based on user configuration
	I0731 15:44:13.083063   69261 start.go:297] selected driver: docker
	I0731 15:44:13.083083   69261 start.go:901] validating driver "docker" against <nil>
	I0731 15:44:13.083097   69261 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:44:13.087362   69261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 15:44:13.167111   69261 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:98 OomKillDisable:false NGoroutines:182 SystemTime:2024-07-31 22:44:13.15823602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0731 15:44:13.167321   69261 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 15:44:13.167499   69261 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 15:44:13.189868   69261 out.go:177] * Using Docker Desktop driver with root privileges
	I0731 15:44:13.210671   69261 cni.go:84] Creating CNI manager for ""
	I0731 15:44:13.210694   69261 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 15:44:13.210703   69261 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 15:44:13.210758   69261 start.go:340] cluster config:
	{Name:force-systemd-env-617000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-617000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:44:13.231496   69261 out.go:177] * Starting "force-systemd-env-617000" primary control-plane node in "force-systemd-env-617000" cluster
	I0731 15:44:13.273714   69261 cache.go:121] Beginning downloading kic base image for docker with docker
	I0731 15:44:13.294551   69261 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0731 15:44:13.336883   69261 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:44:13.336928   69261 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0731 15:44:13.336975   69261 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0731 15:44:13.337000   69261 cache.go:56] Caching tarball of preloaded images
	I0731 15:44:13.337244   69261 preload.go:172] Found /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 15:44:13.337262   69261 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:44:13.338211   69261 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/force-systemd-env-617000/config.json ...
	I0731 15:44:13.338373   69261 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/force-systemd-env-617000/config.json: {Name:mkc34494b56c0641e4689a48e557f998eca10d77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0731 15:44:13.363492   69261 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0731 15:44:13.363503   69261 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0731 15:44:13.363629   69261 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0731 15:44:13.363647   69261 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0731 15:44:13.363653   69261 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0731 15:44:13.363660   69261 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0731 15:44:13.363665   69261 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0731 15:44:13.366892   69261 image.go:273] response: 
	I0731 15:44:13.494643   69261 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0731 15:44:13.494692   69261 cache.go:194] Successfully downloaded all kic artifacts
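	# Context for the cache sequence above: the kicbase image exists in the
	# local daemon but is reported as the wrong architecture, so minikube
	# ignores it and loads its on-disk tarball cache instead. The platform
	# check can be reproduced by hand; a sketch, assuming the image is
	# tagged locally (the log's reference is digest-pinned, abbreviated here):
	#     docker image inspect \
	#       'gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326' \
	#       --format '{{.Os}}/{{.Architecture}}'
	# A mismatch with the host (amd64 here) triggers the tarball fallback.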
	I0731 15:44:13.494740   69261 start.go:360] acquireMachinesLock for force-systemd-env-617000: {Name:mk5491f284ef1afc902dfbc7dc687450839dd66e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:44:13.494913   69261 start.go:364] duration metric: took 161.451µs to acquireMachinesLock for "force-systemd-env-617000"
	I0731 15:44:13.494941   69261 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-617000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-617000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 15:44:13.495001   69261 start.go:125] createHost starting for "" (driver="docker")
	I0731 15:44:13.537813   69261 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0731 15:44:13.538020   69261 start.go:159] libmachine.API.Create for "force-systemd-env-617000" (driver="docker")
	I0731 15:44:13.538047   69261 client.go:168] LocalClient.Create starting
	I0731 15:44:13.538148   69261 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/ca.pem
	I0731 15:44:13.538197   69261 main.go:141] libmachine: Decoding PEM data...
	I0731 15:44:13.538213   69261 main.go:141] libmachine: Parsing certificate...
	I0731 15:44:13.538264   69261 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/cert.pem
	I0731 15:44:13.538304   69261 main.go:141] libmachine: Decoding PEM data...
	I0731 15:44:13.538312   69261 main.go:141] libmachine: Parsing certificate...
	I0731 15:44:13.538814   69261 cli_runner.go:164] Run: docker network inspect force-systemd-env-617000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 15:44:13.556436   69261 cli_runner.go:211] docker network inspect force-systemd-env-617000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 15:44:13.556556   69261 network_create.go:284] running [docker network inspect force-systemd-env-617000] to gather additional debugging logs...
	I0731 15:44:13.556571   69261 cli_runner.go:164] Run: docker network inspect force-systemd-env-617000
	W0731 15:44:13.574046   69261 cli_runner.go:211] docker network inspect force-systemd-env-617000 returned with exit code 1
	I0731 15:44:13.574076   69261 network_create.go:287] error running [docker network inspect force-systemd-env-617000]: docker network inspect force-systemd-env-617000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-617000 not found
	I0731 15:44:13.574095   69261 network_create.go:289] output of [docker network inspect force-systemd-env-617000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-617000 not found
	
	** /stderr **
	I0731 15:44:13.574238   69261 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 15:44:13.592981   69261 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:44:13.594510   69261 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:44:13.596088   69261 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:44:13.597585   69261 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:44:13.597926   69261 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00178c4a0}
	I0731 15:44:13.597944   69261 network_create.go:124] attempt to create docker network force-systemd-env-617000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0731 15:44:13.598023   69261 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-617000 force-systemd-env-617000
	I0731 15:44:13.661967   69261 network_create.go:108] docker network force-systemd-env-617000 192.168.85.0/24 created
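	# The subnet scan above walks minikube's private /24 candidates
	# (192.168.49.0, .58.0, .67.0 and .76.0 are already reserved) and settles
	# on 192.168.85.0/24 before issuing the `docker network create` shown.
	# The result can be verified with a format query; a sketch:
	#     docker network inspect force-systemd-env-617000 \
	#       --format '{{ (index .IPAM.Config 0).Subnet }} via {{ (index .IPAM.Config 0).Gateway }}'
	# Expected: 192.168.85.0/24 via 192.168.85.1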
	I0731 15:44:13.662008   69261 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-617000" container
	I0731 15:44:13.662124   69261 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 15:44:13.681410   69261 cli_runner.go:164] Run: docker volume create force-systemd-env-617000 --label name.minikube.sigs.k8s.io=force-systemd-env-617000 --label created_by.minikube.sigs.k8s.io=true
	I0731 15:44:13.700459   69261 oci.go:103] Successfully created a docker volume force-systemd-env-617000
	I0731 15:44:13.700580   69261 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-617000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-617000 --entrypoint /usr/bin/test -v force-systemd-env-617000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0731 15:44:14.112849   69261 oci.go:107] Successfully prepared a docker volume force-systemd-env-617000
	I0731 15:44:14.112893   69261 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:44:14.112906   69261 kic.go:194] Starting extracting preloaded images to volume ...
	I0731 15:44:14.113039   69261 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-617000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
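	# Note the timestamp jump that follows: the preload extraction into the
	# volume starts at 15:44:14 and the next log line appears at 15:50:13,
	# so the extraction appears to have consumed essentially the whole 360 s
	# create-host budget (StartHostTimeout:6m0s in the cluster config above)
	# before the node container was ever created. The gap in seconds,
	# illustrative only:
	#     echo $(( (15*3600 + 50*60 + 13) - (15*3600 + 44*60 + 14) ))   # 359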
	I0731 15:50:13.544531   69261 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 15:50:13.544681   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:50:13.564612   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	I0731 15:50:13.564775   69261 retry.go:31] will retry after 164.38124ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:13.730441   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:50:13.750033   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	I0731 15:50:13.750129   69261 retry.go:31] will retry after 298.521015ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:14.049480   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:50:14.070137   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	I0731 15:50:14.070253   69261 retry.go:31] will retry after 689.886148ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:14.762555   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:50:14.782524   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	W0731 15:50:14.782663   69261 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	
	W0731 15:50:14.782684   69261 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:14.782752   69261 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 15:50:14.782805   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:50:14.800257   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	I0731 15:50:14.800346   69261 retry.go:31] will retry after 259.887091ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:15.061937   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:50:15.081850   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	I0731 15:50:15.081943   69261 retry.go:31] will retry after 429.971611ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:15.513573   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:50:15.533276   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	I0731 15:50:15.533377   69261 retry.go:31] will retry after 634.365833ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:16.168588   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:50:16.188844   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	W0731 15:50:16.188963   69261 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	
	W0731 15:50:16.188986   69261 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:16.188998   69261 start.go:128] duration metric: took 6m2.689029114s to createHost
	I0731 15:50:16.189010   69261 start.go:83] releasing machines lock for "force-systemd-env-617000", held for 6m2.689133387s
	W0731 15:50:16.189025   69261 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0731 15:50:16.189476   69261 cli_runner.go:164] Run: docker container inspect force-systemd-env-617000 --format={{.State.Status}}
	W0731 15:50:16.207659   69261 cli_runner.go:211] docker container inspect force-systemd-env-617000 --format={{.State.Status}} returned with exit code 1
	I0731 15:50:16.207711   69261 delete.go:82] Unable to get host status for force-systemd-env-617000, assuming it has already been deleted: state: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	W0731 15:50:16.207789   69261 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0731 15:50:16.207800   69261 start.go:729] Will try again in 5 seconds ...
	I0731 15:50:21.210082   69261 start.go:360] acquireMachinesLock for force-systemd-env-617000: {Name:mk5491f284ef1afc902dfbc7dc687450839dd66e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:50:21.210288   69261 start.go:364] duration metric: took 163.174µs to acquireMachinesLock for "force-systemd-env-617000"
	I0731 15:50:21.210330   69261 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:50:21.210349   69261 fix.go:54] fixHost starting: 
	I0731 15:50:21.210751   69261 cli_runner.go:164] Run: docker container inspect force-systemd-env-617000 --format={{.State.Status}}
	W0731 15:50:21.230183   69261 cli_runner.go:211] docker container inspect force-systemd-env-617000 --format={{.State.Status}} returned with exit code 1
	I0731 15:50:21.230233   69261 fix.go:112] recreateIfNeeded on force-systemd-env-617000: state= err=unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:21.230258   69261 fix.go:117] machineExists: false. err=machine does not exist
	I0731 15:50:21.252081   69261 out.go:177] * docker "force-systemd-env-617000" container is missing, will recreate.
	I0731 15:50:21.294920   69261 delete.go:124] DEMOLISHING force-systemd-env-617000 ...
	I0731 15:50:21.295140   69261 cli_runner.go:164] Run: docker container inspect force-systemd-env-617000 --format={{.State.Status}}
	W0731 15:50:21.315172   69261 cli_runner.go:211] docker container inspect force-systemd-env-617000 --format={{.State.Status}} returned with exit code 1
	W0731 15:50:21.315232   69261 stop.go:83] unable to get state: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:21.315255   69261 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:21.315646   69261 cli_runner.go:164] Run: docker container inspect force-systemd-env-617000 --format={{.State.Status}}
	W0731 15:50:21.332761   69261 cli_runner.go:211] docker container inspect force-systemd-env-617000 --format={{.State.Status}} returned with exit code 1
	I0731 15:50:21.332826   69261 delete.go:82] Unable to get host status for force-systemd-env-617000, assuming it has already been deleted: state: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:21.332914   69261 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-617000
	W0731 15:50:21.350031   69261 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-617000 returned with exit code 1
	I0731 15:50:21.350076   69261 kic.go:371] could not find the container force-systemd-env-617000 to remove it. will try anyways
	I0731 15:50:21.350159   69261 cli_runner.go:164] Run: docker container inspect force-systemd-env-617000 --format={{.State.Status}}
	W0731 15:50:21.367150   69261 cli_runner.go:211] docker container inspect force-systemd-env-617000 --format={{.State.Status}} returned with exit code 1
	W0731 15:50:21.367201   69261 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:21.367293   69261 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-617000 /bin/bash -c "sudo init 0"
	W0731 15:50:21.384271   69261 cli_runner.go:211] docker exec --privileged -t force-systemd-env-617000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0731 15:50:21.384303   69261 oci.go:650] error shutdown force-systemd-env-617000: docker exec --privileged -t force-systemd-env-617000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:22.388667   69261 cli_runner.go:164] Run: docker container inspect force-systemd-env-617000 --format={{.State.Status}}
	W0731 15:50:22.408063   69261 cli_runner.go:211] docker container inspect force-systemd-env-617000 --format={{.State.Status}} returned with exit code 1
	I0731 15:50:22.408113   69261 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:22.408124   69261 oci.go:664] temporary error: container force-systemd-env-617000 status is  but expect it to be exited
	I0731 15:50:22.408149   69261 retry.go:31] will retry after 563.442741ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:22.974306   69261 cli_runner.go:164] Run: docker container inspect force-systemd-env-617000 --format={{.State.Status}}
	W0731 15:50:22.994146   69261 cli_runner.go:211] docker container inspect force-systemd-env-617000 --format={{.State.Status}} returned with exit code 1
	I0731 15:50:22.994208   69261 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:22.994216   69261 oci.go:664] temporary error: container force-systemd-env-617000 status is  but expect it to be exited
	I0731 15:50:22.994241   69261 retry.go:31] will retry after 943.158795ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:23.940688   69261 cli_runner.go:164] Run: docker container inspect force-systemd-env-617000 --format={{.State.Status}}
	W0731 15:50:23.960503   69261 cli_runner.go:211] docker container inspect force-systemd-env-617000 --format={{.State.Status}} returned with exit code 1
	I0731 15:50:23.960556   69261 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:23.960569   69261 oci.go:664] temporary error: container force-systemd-env-617000 status is  but expect it to be exited
	I0731 15:50:23.960610   69261 retry.go:31] will retry after 1.518006231s: couldn't verify container is exited. %v: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:25.484025   69261 cli_runner.go:164] Run: docker container inspect force-systemd-env-617000 --format={{.State.Status}}
	W0731 15:50:25.503837   69261 cli_runner.go:211] docker container inspect force-systemd-env-617000 --format={{.State.Status}} returned with exit code 1
	I0731 15:50:25.503887   69261 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:25.503900   69261 oci.go:664] temporary error: container force-systemd-env-617000 status is  but expect it to be exited
	I0731 15:50:25.503924   69261 retry.go:31] will retry after 977.73311ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:26.486333   69261 cli_runner.go:164] Run: docker container inspect force-systemd-env-617000 --format={{.State.Status}}
	W0731 15:50:26.506286   69261 cli_runner.go:211] docker container inspect force-systemd-env-617000 --format={{.State.Status}} returned with exit code 1
	I0731 15:50:26.506338   69261 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:26.506346   69261 oci.go:664] temporary error: container force-systemd-env-617000 status is  but expect it to be exited
	I0731 15:50:26.506370   69261 retry.go:31] will retry after 2.708540903s: couldn't verify container is exited. %v: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:29.220952   69261 cli_runner.go:164] Run: docker container inspect force-systemd-env-617000 --format={{.State.Status}}
	W0731 15:50:29.240803   69261 cli_runner.go:211] docker container inspect force-systemd-env-617000 --format={{.State.Status}} returned with exit code 1
	I0731 15:50:29.240860   69261 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:29.240869   69261 oci.go:664] temporary error: container force-systemd-env-617000 status is  but expect it to be exited
	I0731 15:50:29.240893   69261 retry.go:31] will retry after 3.374691594s: couldn't verify container is exited. %v: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:32.623386   69261 cli_runner.go:164] Run: docker container inspect force-systemd-env-617000 --format={{.State.Status}}
	W0731 15:50:32.643873   69261 cli_runner.go:211] docker container inspect force-systemd-env-617000 --format={{.State.Status}} returned with exit code 1
	I0731 15:50:32.643924   69261 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:32.643933   69261 oci.go:664] temporary error: container force-systemd-env-617000 status is  but expect it to be exited
	I0731 15:50:32.643957   69261 retry.go:31] will retry after 6.47020685s: couldn't verify container is exited. %v: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:39.123547   69261 cli_runner.go:164] Run: docker container inspect force-systemd-env-617000 --format={{.State.Status}}
	W0731 15:50:39.143847   69261 cli_runner.go:211] docker container inspect force-systemd-env-617000 --format={{.State.Status}} returned with exit code 1
	I0731 15:50:39.143896   69261 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:50:39.143905   69261 oci.go:664] temporary error: container force-systemd-env-617000 status is  but expect it to be exited
	I0731 15:50:39.143938   69261 oci.go:88] couldn't shut down force-systemd-env-617000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	 
	I0731 15:50:39.144023   69261 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-617000
	I0731 15:50:39.161745   69261 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-617000
	W0731 15:50:39.179025   69261 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-617000 returned with exit code 1
	I0731 15:50:39.179143   69261 cli_runner.go:164] Run: docker network inspect force-systemd-env-617000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 15:50:39.196749   69261 cli_runner.go:164] Run: docker network rm force-systemd-env-617000
	I0731 15:50:39.276744   69261 fix.go:124] Sleeping 1 second for extra luck!
	I0731 15:50:40.279200   69261 start.go:125] createHost starting for "" (driver="docker")
	I0731 15:50:40.302532   69261 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0731 15:50:40.302701   69261 start.go:159] libmachine.API.Create for "force-systemd-env-617000" (driver="docker")
	I0731 15:50:40.302736   69261 client.go:168] LocalClient.Create starting
	I0731 15:50:40.302973   69261 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/ca.pem
	I0731 15:50:40.303070   69261 main.go:141] libmachine: Decoding PEM data...
	I0731 15:50:40.303102   69261 main.go:141] libmachine: Parsing certificate...
	I0731 15:50:40.303186   69261 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/cert.pem
	I0731 15:50:40.303266   69261 main.go:141] libmachine: Decoding PEM data...
	I0731 15:50:40.303283   69261 main.go:141] libmachine: Parsing certificate...
	I0731 15:50:40.304309   69261 cli_runner.go:164] Run: docker network inspect force-systemd-env-617000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 15:50:40.322917   69261 cli_runner.go:211] docker network inspect force-systemd-env-617000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 15:50:40.323023   69261 network_create.go:284] running [docker network inspect force-systemd-env-617000] to gather additional debugging logs...
	I0731 15:50:40.323040   69261 cli_runner.go:164] Run: docker network inspect force-systemd-env-617000
	W0731 15:50:40.340934   69261 cli_runner.go:211] docker network inspect force-systemd-env-617000 returned with exit code 1
	I0731 15:50:40.340965   69261 network_create.go:287] error running [docker network inspect force-systemd-env-617000]: docker network inspect force-systemd-env-617000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-617000 not found
	I0731 15:50:40.340976   69261 network_create.go:289] output of [docker network inspect force-systemd-env-617000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-617000 not found
	
	** /stderr **
	I0731 15:50:40.341135   69261 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 15:50:40.360523   69261 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:50:40.362116   69261 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:50:40.363809   69261 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:50:40.365434   69261 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:50:40.367136   69261 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:50:40.368948   69261 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:50:40.369534   69261 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00161f410}
	I0731 15:50:40.369551   69261 network_create.go:124] attempt to create docker network force-systemd-env-617000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0731 15:50:40.369644   69261 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-617000 force-systemd-env-617000
	I0731 15:50:40.434048   69261 network_create.go:108] docker network force-systemd-env-617000 192.168.103.0/24 created
	I0731 15:50:40.434089   69261 kic.go:121] calculated static IP "192.168.103.2" for the "force-systemd-env-617000" container
	I0731 15:50:40.434190   69261 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 15:50:40.453222   69261 cli_runner.go:164] Run: docker volume create force-systemd-env-617000 --label name.minikube.sigs.k8s.io=force-systemd-env-617000 --label created_by.minikube.sigs.k8s.io=true
	I0731 15:50:40.470482   69261 oci.go:103] Successfully created a docker volume force-systemd-env-617000
	I0731 15:50:40.470601   69261 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-617000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-617000 --entrypoint /usr/bin/test -v force-systemd-env-617000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0731 15:50:40.748627   69261 oci.go:107] Successfully prepared a docker volume force-systemd-env-617000
	I0731 15:50:40.748672   69261 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:50:40.748692   69261 kic.go:194] Starting extracting preloaded images to volume ...
	I0731 15:50:40.748831   69261 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-617000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 15:56:40.321783   69261 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 15:56:40.321916   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:56:40.342621   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	I0731 15:56:40.342733   69261 retry.go:31] will retry after 283.436432ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:56:40.626536   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:56:40.646182   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	I0731 15:56:40.646300   69261 retry.go:31] will retry after 561.139113ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:56:41.208992   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:56:41.228919   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	I0731 15:56:41.229018   69261 retry.go:31] will retry after 476.741949ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:56:41.708210   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:56:41.727226   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	W0731 15:56:41.727335   69261 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	
	W0731 15:56:41.727357   69261 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:56:41.727419   69261 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 15:56:41.727475   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:56:41.745073   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	I0731 15:56:41.745167   69261 retry.go:31] will retry after 211.429144ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:56:41.958936   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:56:41.978196   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	I0731 15:56:41.978302   69261 retry.go:31] will retry after 353.112508ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:56:42.333869   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:56:42.353896   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	I0731 15:56:42.353996   69261 retry.go:31] will retry after 744.088214ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:56:43.098834   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:56:43.118980   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	I0731 15:56:43.119080   69261 retry.go:31] will retry after 576.040025ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:56:43.696358   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:56:43.716054   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	W0731 15:56:43.716173   69261 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	
	W0731 15:56:43.716188   69261 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:56:43.716198   69261 start.go:128] duration metric: took 6m3.42028302s to createHost
	I0731 15:56:43.716271   69261 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 15:56:43.716328   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:56:43.734257   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	I0731 15:56:43.734351   69261 retry.go:31] will retry after 369.688849ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:56:44.104446   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:56:44.124332   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	I0731 15:56:44.124425   69261 retry.go:31] will retry after 300.902079ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:56:44.426199   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:56:44.445541   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	I0731 15:56:44.445635   69261 retry.go:31] will retry after 699.30854ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:56:45.147345   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:56:45.167018   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	I0731 15:56:45.167116   69261 retry.go:31] will retry after 447.912603ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:56:45.617503   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:56:45.637213   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	W0731 15:56:45.637325   69261 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	
	W0731 15:56:45.637341   69261 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:56:45.637402   69261 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 15:56:45.637456   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:56:45.654306   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	I0731 15:56:45.654400   69261 retry.go:31] will retry after 287.605674ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:56:45.942659   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:56:45.962178   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	I0731 15:56:45.962289   69261 retry.go:31] will retry after 229.14404ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:56:46.193754   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:56:46.213338   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	I0731 15:56:46.213435   69261 retry.go:31] will retry after 501.573489ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:56:46.716200   69261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000
	W0731 15:56:46.736495   69261 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000 returned with exit code 1
	W0731 15:56:46.736590   69261 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	
	W0731 15:56:46.736609   69261 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-617000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-617000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	I0731 15:56:46.736619   69261 fix.go:56] duration metric: took 6m25.477161658s for fixHost
	I0731 15:56:46.736629   69261 start.go:83] releasing machines lock for "force-systemd-env-617000", held for 6m25.477215711s
	W0731 15:56:46.736712   69261 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-617000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-617000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0731 15:56:46.778963   69261 out.go:177] 
	W0731 15:56:46.800204   69261 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0731 15:56:46.800271   69261 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0731 15:56:46.800301   69261 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0731 15:56:46.821219   69261 out.go:177] 

** /stderr **
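The stderr log above is dominated by one failing call: minikube resolves the node's SSH port by asking Docker which host port is published for the container's 22/tcp, and retries with a growing delay while the container is missing. Below is a minimal Go sketch of that pattern; the function name and backoff constants are illustrative, not minikube's actual retry.go API, and it assumes the docker CLI is on PATH.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// sshHostPort runs the same Go-template query that fails throughout the log:
// it asks Docker which host port is published for the container's 22/tcp.
func sshHostPort(name string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
	if err != nil {
		return "", fmt.Errorf("get port 22 for %q: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	const name = "force-systemd-env-617000" // container name taken from the log
	delay := 250 * time.Millisecond
	deadline := time.Now().Add(30 * time.Second) // the real test waits minutes, not seconds
	for {
		port, err := sshHostPort(name)
		if err == nil {
			fmt.Println("ssh host port:", port)
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("giving up:", err)
			return
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the delay, roughly like the log's backoff
	}
}

Against a container that never appears, this prints the same "will retry after ..." progression seen above until the deadline expires.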
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-617000 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-617000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-617000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (161.545572ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-env-617000 host status: state: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000
	

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-617000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
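For context, the cgroup-driver probe that fails with exit status 80 above reduces to a single Docker query; in the real test it runs inside the minikube node via "minikube ssh", and here it never got that far because the node container was never created. A local sketch of the check itself, assuming a reachable Docker daemon:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the daemon which cgroup driver it uses; TestForceSystemdEnv
	// expects "systemd" when MINIKUBE_FORCE_SYSTEMD is in effect.
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	if driver := strings.TrimSpace(string(out)); driver != "systemd" {
		fmt.Printf("expected cgroup driver systemd, got %q\n", driver)
		return
	}
	fmt.Println("cgroup driver is systemd")
}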
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-31 15:56:47.057892 -0700 PDT m=+6147.737048501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-617000
helpers_test.go:235: (dbg) docker inspect force-systemd-env-617000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-env-617000",
	        "Id": "a1aae444cca3dc71aabf75feddd4468b158e1704b50ff219654baa96139a41d1",
	        "Created": "2024-07-31T22:50:40.365713835Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-env-617000"
	        }
	    }
	]

-- /stdout --
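The docker inspect output above shows the leftover 192.168.103.0/24 network that the earlier network.go lines selected: minikube walks candidate private /24s (192.168.49.0/24, then .58, .67, and so on in steps of 9, as seen in the log) and takes the first one no existing Docker network reserves. A rough sketch of that scan, with the reserved set hard-coded from the log rather than gathered from docker network inspect:

package main

import "fmt"

// firstFreeSubnet mimics the scan traced by the network.go:209/206 lines:
// step through candidate 192.168.x.0/24 subnets and return the first one
// that no existing Docker network reserves.
func firstFreeSubnet(reserved map[string]bool) string {
	for third := 49; third <= 254; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !reserved[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	// Reserved subnets exactly as reported in the log above.
	reserved := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
		"192.168.85.0/24": true,
		"192.168.94.0/24": true,
	}
	fmt.Println(firstFreeSubnet(reserved)) // prints 192.168.103.0/24, matching the log
}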
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-617000 -n force-systemd-env-617000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-617000 -n force-systemd-env-617000: exit status 7 (73.594377ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0731 15:56:47.150884   69609 status.go:249] status error: host: state: unknown state "force-systemd-env-617000": docker container inspect force-systemd-env-617000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-617000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-617000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-617000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-617000
--- FAIL: TestForceSystemdEnv (754.89s)

TestMountStart/serial/VerifyMountFirst (893.23s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-455000 ssh -- ls /minikube-host
E0731 14:43:40.843141   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 14:44:32.988325   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 14:45:56.041506   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 14:48:40.847414   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 14:49:32.993456   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 14:53:40.880930   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 14:54:33.026693   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-1-455000 ssh -- ls /minikube-host: signal: killed (14m52.958509845s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-1-455000 ssh -- ls /minikube-host" : signal: killed
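The killed probe boils down to listing the bind mount that the docker inspect output below records as /host_mnt/Users:/minikube-host. The real test goes through "minikube ssh -- ls /minikube-host"; this sketch approximates it by exec'ing into the container directly (container name taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// List the bind mount directly in the container; the real test does the
	// equivalent over SSH via "minikube ssh -- ls /minikube-host".
	out, err := exec.Command("docker", "exec", "mount-start-1-455000", "ls", "/minikube-host").CombinedOutput()
	if err != nil {
		fmt.Println("mount probe failed:", err)
	}
	fmt.Print(string(out))
}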
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountFirst]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-1-455000
helpers_test.go:235: (dbg) docker inspect mount-start-1-455000:

-- stdout --
	[
	    {
	        "Id": "af25aaaa2bf4dcfb01808320b4bc37bde04d513f364a62a79b1d2bd91b74e2f4",
	        "Created": "2024-07-31T21:40:14.851938624Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1613974,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-31T21:40:14.947412346Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f7a7de1851ee150766e4477ba0f200b8a850318ef537b8ef6899afcaea59940a",
	        "ResolvConfPath": "/var/lib/docker/containers/af25aaaa2bf4dcfb01808320b4bc37bde04d513f364a62a79b1d2bd91b74e2f4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/af25aaaa2bf4dcfb01808320b4bc37bde04d513f364a62a79b1d2bd91b74e2f4/hostname",
	        "HostsPath": "/var/lib/docker/containers/af25aaaa2bf4dcfb01808320b4bc37bde04d513f364a62a79b1d2bd91b74e2f4/hosts",
	        "LogPath": "/var/lib/docker/containers/af25aaaa2bf4dcfb01808320b4bc37bde04d513f364a62a79b1d2bd91b74e2f4/af25aaaa2bf4dcfb01808320b4bc37bde04d513f364a62a79b1d2bd91b74e2f4-json.log",
	        "Name": "/mount-start-1-455000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "mount-start-1-455000:/var",
	                "/host_mnt/Users:/minikube-host"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-1-455000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6f847b9488cf71729216c9fa4b955034aa38b30adc97bbfb523de6e3dcab0e33-init/diff:/var/lib/docker/overlay2/207d03726c50b3fe34f89ddf93a3d72e479aa9574ae2e6c4741b3c63831e6ed7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6f847b9488cf71729216c9fa4b955034aa38b30adc97bbfb523de6e3dcab0e33/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6f847b9488cf71729216c9fa4b955034aa38b30adc97bbfb523de6e3dcab0e33/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6f847b9488cf71729216c9fa4b955034aa38b30adc97bbfb523de6e3dcab0e33/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "mount-start-1-455000",
	                "Source": "/var/lib/docker/volumes/mount-start-1-455000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/host_mnt/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-1-455000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-1-455000",
	                "name.minikube.sigs.k8s.io": "mount-start-1-455000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "487ab893cce0a2bfe700dcef1b140f7b26b5ac2d7adcd923f8463077c1c8ba2d",
	            "SandboxKey": "/var/run/docker/netns/487ab893cce0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62110"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62111"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62112"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-1-455000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null,
	                    "NetworkID": "f074d314d1102bccb0412ebb23bafcc20897207ffc21848c4cb1fc46abf7feac",
	                    "EndpointID": "ba76dced02f466ed76a062654e02c83b46236d3f46cf64adfab276e6d91d85d3",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "mount-start-1-455000",
	                        "af25aaaa2bf4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
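Editor's note: the inspect dump above shows the mount-start-1-455000 container publishing 22/tcp on 127.0.0.1:62108, which is how the harness reaches the node over SSH. A minimal sketch of the same port lookup, using the exact Go template that appears later in this log (profile name taken from the dump; error handling simplified, not minikube's actual helper):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same template the harness uses to resolve the SSH host port.
    	out, err := exec.Command("docker", "container", "inspect",
    		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		"mount-start-1-455000").Output()
    	if err != nil {
    		fmt.Println("inspect failed:", err) // e.g. "No such container"
    		return
    	}
    	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
    }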
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-455000 -n mount-start-1-455000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-455000 -n mount-start-1-455000: exit status 6 (245.289031ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 14:55:13.463690   67025 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-1-455000" does not appear in /Users/jenkins/minikube-integration/19360-61501/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-1-455000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountFirst (893.23s)
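Editor's note: the exit status 6 above is the kubeconfig check failing, not the host: the container reports Running, but "mount-start-1-455000" never made it into the kubeconfig, so status.go fails at the endpoint lookup. A hedged sketch of an equivalent check via kubectl (assumes kubectl is on PATH; this is not minikube's actual code path):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// List context names and look for the minikube profile.
    	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
    	if err != nil {
    		fmt.Println("kubectl failed:", err)
    		return
    	}
    	profile := "mount-start-1-455000" // profile name from the log
    	for _, ctx := range strings.Fields(string(out)) {
    		if ctx == profile {
    			fmt.Println("context present; run `minikube update-context` if stale")
    			return
    		}
    	}
    	fmt.Printf("%q does not appear in kubeconfig, matching the status error\n", profile)
    }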

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (750.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-311000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0731 14:56:44.031932   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 14:58:40.882899   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 14:59:33.028131   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 15:02:36.082721   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 15:03:40.884830   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 15:04:33.030947   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 15:08:41.016706   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-311000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m30.403769454s)

                                                
                                                
-- stdout --
	* [multinode-311000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "multinode-311000" primary control-plane node in "multinode-311000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-311000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 14:56:21.162565   67081 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:56:21.162845   67081 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:56:21.162851   67081 out.go:304] Setting ErrFile to fd 2...
	I0731 14:56:21.162855   67081 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:56:21.163027   67081 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 14:56:21.164500   67081 out.go:298] Setting JSON to false
	I0731 14:56:21.186787   67081 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":19548,"bootTime":1722443433,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0731 14:56:21.186870   67081 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 14:56:21.209146   67081 out.go:177] * [multinode-311000] minikube v1.33.1 on Darwin 14.5
	I0731 14:56:21.251973   67081 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 14:56:21.252013   67081 notify.go:220] Checking for updates...
	I0731 14:56:21.294706   67081 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	I0731 14:56:21.316900   67081 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0731 14:56:21.338549   67081 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 14:56:21.359892   67081 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	I0731 14:56:21.381887   67081 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 14:56:21.404160   67081 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 14:56:21.428636   67081 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0731 14:56:21.428924   67081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 14:56:21.507512   67081 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:74 SystemTime:2024-07-31 21:56:21.498220274 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0731 14:56:21.530026   67081 out.go:177] * Using the docker driver based on user configuration
	I0731 14:56:21.572555   67081 start.go:297] selected driver: docker
	I0731 14:56:21.572583   67081 start.go:901] validating driver "docker" against <nil>
	I0731 14:56:21.572598   67081 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 14:56:21.576970   67081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 14:56:21.653152   67081 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:74 SystemTime:2024-07-31 21:56:21.644221506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0731 14:56:21.653341   67081 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 14:56:21.653528   67081 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 14:56:21.675619   67081 out.go:177] * Using Docker Desktop driver with root privileges
	I0731 14:56:21.697469   67081 cni.go:84] Creating CNI manager for ""
	I0731 14:56:21.697502   67081 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0731 14:56:21.697514   67081 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 14:56:21.697691   67081 start.go:340] cluster config:
	{Name:multinode-311000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 14:56:21.719535   67081 out.go:177] * Starting "multinode-311000" primary control-plane node in "multinode-311000" cluster
	I0731 14:56:21.761652   67081 cache.go:121] Beginning downloading kic base image for docker with docker
	I0731 14:56:21.783669   67081 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0731 14:56:21.825758   67081 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 14:56:21.825804   67081 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0731 14:56:21.825843   67081 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0731 14:56:21.825863   67081 cache.go:56] Caching tarball of preloaded images
	I0731 14:56:21.826090   67081 preload.go:172] Found /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 14:56:21.826109   67081 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 14:56:21.827713   67081 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/multinode-311000/config.json ...
	I0731 14:56:21.827853   67081 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/multinode-311000/config.json: {Name:mk7c75833536aa8da127d5416879386e82bc244f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0731 14:56:21.851417   67081 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0731 14:56:21.851429   67081 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0731 14:56:21.851579   67081 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0731 14:56:21.851607   67081 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0731 14:56:21.851617   67081 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0731 14:56:21.851626   67081 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0731 14:56:21.851631   67081 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0731 14:56:21.854660   67081 image.go:273] response: 
	I0731 14:56:21.985191   67081 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0731 14:56:21.985255   67081 cache.go:194] Successfully downloaded all kic artifacts
	I0731 14:56:21.985302   67081 start.go:360] acquireMachinesLock for multinode-311000: {Name:mk7981435695037af8cd786e9a728446a653cd34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 14:56:21.985483   67081 start.go:364] duration metric: took 168.185µs to acquireMachinesLock for "multinode-311000"
	I0731 14:56:21.985512   67081 start.go:93] Provisioning new machine with config: &{Name:multinode-311000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 14:56:21.985570   67081 start.go:125] createHost starting for "" (driver="docker")
	I0731 14:56:22.030903   67081 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0731 14:56:22.031108   67081 start.go:159] libmachine.API.Create for "multinode-311000" (driver="docker")
	I0731 14:56:22.031136   67081 client.go:168] LocalClient.Create starting
	I0731 14:56:22.031243   67081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/ca.pem
	I0731 14:56:22.031295   67081 main.go:141] libmachine: Decoding PEM data...
	I0731 14:56:22.031312   67081 main.go:141] libmachine: Parsing certificate...
	I0731 14:56:22.031368   67081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/cert.pem
	I0731 14:56:22.031418   67081 main.go:141] libmachine: Decoding PEM data...
	I0731 14:56:22.031428   67081 main.go:141] libmachine: Parsing certificate...
	I0731 14:56:22.031904   67081 cli_runner.go:164] Run: docker network inspect multinode-311000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 14:56:22.050150   67081 cli_runner.go:211] docker network inspect multinode-311000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 14:56:22.050256   67081 network_create.go:284] running [docker network inspect multinode-311000] to gather additional debugging logs...
	I0731 14:56:22.050274   67081 cli_runner.go:164] Run: docker network inspect multinode-311000
	W0731 14:56:22.067410   67081 cli_runner.go:211] docker network inspect multinode-311000 returned with exit code 1
	I0731 14:56:22.067436   67081 network_create.go:287] error running [docker network inspect multinode-311000]: docker network inspect multinode-311000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-311000 not found
	I0731 14:56:22.067446   67081 network_create.go:289] output of [docker network inspect multinode-311000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-311000 not found
	
	** /stderr **
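	Editor's note: the exit-1 from docker network inspect is expected on a fresh profile: minikube probes the named network first and treats "not found" as the signal to create it. A minimal sketch of that probe (error-string matching simplified relative to network_create.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // networkExists probes a docker network by name, treating a
    // "not found" daemon error as a clean negative result.
    func networkExists(name string) (bool, error) {
    	out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
    	if err == nil {
    		return true, nil
    	}
    	if strings.Contains(string(out), "not found") {
    		return false, nil
    	}
    	return false, fmt.Errorf("inspect %s: %v: %s", name, err, out)
    }

    func main() {
    	ok, err := networkExists("multinode-311000")
    	fmt.Println(ok, err)
    }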
	I0731 14:56:22.067639   67081 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 14:56:22.087143   67081 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 14:56:22.088700   67081 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 14:56:22.089067   67081 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014fcb40}
	I0731 14:56:22.089085   67081 network_create.go:124] attempt to create docker network multinode-311000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0731 14:56:22.089155   67081 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-311000 multinode-311000
	W0731 14:56:22.107296   67081 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-311000 multinode-311000 returned with exit code 1
	W0731 14:56:22.107330   67081 network_create.go:149] failed to create docker network multinode-311000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-311000 multinode-311000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0731 14:56:22.107354   67081 network_create.go:116] failed to create docker network multinode-311000 192.168.67.0/24, will retry: subnet is taken
	I0731 14:56:22.108766   67081 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 14:56:22.109148   67081 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001515cb0}
	I0731 14:56:22.109163   67081 network_create.go:124] attempt to create docker network multinode-311000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0731 14:56:22.109234   67081 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-311000 multinode-311000
	I0731 14:56:22.187255   67081 network_create.go:108] docker network multinode-311000 192.168.76.0/24 created
	I0731 14:56:22.187300   67081 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-311000" container
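	Editor's note: the first create at 192.168.67.0/24 fails with "Pool overlaps" because the earlier mount-start network still holds that subnet, so minikube steps to the next free /24 (candidate third octets in this run advance by 9: .49, .58, .67, .76). A sketch of that walk-and-create loop (step size and candidates inferred from this log, not from the minikube source):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	name := "multinode-311000"
    	// Third octets observed in this run: 49, 58, 67, 76, ...
    	for octet := 49; octet <= 255; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		gateway := fmt.Sprintf("192.168.%d.1", octet)
    		out, err := exec.Command("docker", "network", "create",
    			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
    		if err == nil {
    			fmt.Println("created", name, "on", subnet)
    			return
    		}
    		if strings.Contains(string(out), "Pool overlaps") {
    			continue // subnet taken, try the next candidate
    		}
    		fmt.Printf("create failed: %v: %s\n", err, out)
    		return
    	}
    }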
	I0731 14:56:22.187464   67081 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 14:56:22.212580   67081 cli_runner.go:164] Run: docker volume create multinode-311000 --label name.minikube.sigs.k8s.io=multinode-311000 --label created_by.minikube.sigs.k8s.io=true
	I0731 14:56:22.238131   67081 oci.go:103] Successfully created a docker volume multinode-311000
	I0731 14:56:22.238289   67081 cli_runner.go:164] Run: docker run --rm --name multinode-311000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-311000 --entrypoint /usr/bin/test -v multinode-311000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0731 14:56:22.677330   67081 oci.go:107] Successfully prepared a docker volume multinode-311000
	I0731 14:56:22.677390   67081 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 14:56:22.677428   67081 kic.go:194] Starting extracting preloaded images to volume ...
	I0731 14:56:22.677577   67081 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-311000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
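	Editor's note: note the timestamps here: the extraction kicks off at 14:56:22 and the next log line is 15:02:22, a six-minute silence that lines up with the 360-second createHost timeout reported further down. A sketch of bounding that same docker run with a context deadline (the tarball path is environment-specific and shown as a placeholder; minikube's own timeout lives in start.go, not around this call):

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// Give the preload extraction the same 360s budget createHost uses.
    	ctx, cancel := context.WithTimeout(context.Background(), 360*time.Second)
    	defer cancel()
    	tarball := "/path/to/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4" // environment-specific
    	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326"
    	cmd := exec.CommandContext(ctx, "docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", "multinode-311000:/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		fmt.Printf("extraction failed or timed out: %v\n%s", err, out)
    	}
    }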
	I0731 15:02:22.035407   67081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 15:02:22.035549   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:02:22.055243   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:02:22.055373   67081 retry.go:31] will retry after 269.577724ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:22.325758   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:02:22.345379   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:02:22.345489   67081 retry.go:31] will retry after 288.054804ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:22.634915   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:02:22.654853   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:02:22.654947   67081 retry.go:31] will retry after 419.495566ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:23.076052   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:02:23.096135   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:02:23.096234   67081 retry.go:31] will retry after 609.341636ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:23.708036   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:02:23.727460   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	W0731 15:02:23.727577   67081 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	
	W0731 15:02:23.727595   67081 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:23.727663   67081 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 15:02:23.727724   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:02:23.745356   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:02:23.745445   67081 retry.go:31] will retry after 175.936968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:23.922699   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:02:23.942834   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:02:23.942930   67081 retry.go:31] will retry after 282.569565ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:24.227852   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:02:24.246376   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:02:24.246480   67081 retry.go:31] will retry after 830.216947ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:25.076964   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:02:25.094548   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	W0731 15:02:25.094657   67081 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	
	W0731 15:02:25.094673   67081 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:25.094701   67081 start.go:128] duration metric: took 6m3.105502116s to createHost
	I0731 15:02:25.094708   67081 start.go:83] releasing machines lock for "multinode-311000", held for 6m3.105604053s
	W0731 15:02:25.094724   67081 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0731 15:02:25.095152   67081 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:02:25.112338   67081 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:02:25.112384   67081 delete.go:82] Unable to get host status for multinode-311000, assuming it has already been deleted: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	W0731 15:02:25.112469   67081 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0731 15:02:25.112479   67081 start.go:729] Will try again in 5 seconds ...
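	Editor's note: createHost gives up after its 360s budget (duration metric: 6m3s), releases the machines lock, and schedules exactly one more attempt five seconds later; the second pass then finds no container and takes the recreate path below. A generic sketch of that single-retry control flow (retry count and delay mirror this log; this is not a minikube API):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // startHost stands in for the real create-host call; it always fails
    // here to show the control flow seen in the log.
    func startHost() error { return errors.New("create host timed out in 360 seconds") }

    func main() {
    	if err := startHost(); err != nil {
    		fmt.Println("! StartHost failed, but will try again:", err)
    		time.Sleep(5 * time.Second)
    		if err := startHost(); err != nil {
    			fmt.Println("giving up:", err) // this run surfaced it as exit status 52
    		}
    	}
    }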
	I0731 15:02:30.114011   67081 start.go:360] acquireMachinesLock for multinode-311000: {Name:mk7981435695037af8cd786e9a728446a653cd34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:02:30.115010   67081 start.go:364] duration metric: took 950.444µs to acquireMachinesLock for "multinode-311000"
	I0731 15:02:30.115077   67081 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:02:30.115097   67081 fix.go:54] fixHost starting: 
	I0731 15:02:30.115588   67081 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:02:30.134843   67081 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:02:30.134885   67081 fix.go:112] recreateIfNeeded on multinode-311000: state= err=unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:30.134903   67081 fix.go:117] machineExists: false. err=machine does not exist
	I0731 15:02:30.157353   67081 out.go:177] * docker "multinode-311000" container is missing, will recreate.
	I0731 15:02:30.199337   67081 delete.go:124] DEMOLISHING multinode-311000 ...
	I0731 15:02:30.199524   67081 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:02:30.219157   67081 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	W0731 15:02:30.219218   67081 stop.go:83] unable to get state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:30.219239   67081 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:30.219607   67081 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:02:30.236473   67081 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:02:30.236542   67081 delete.go:82] Unable to get host status for multinode-311000, assuming it has already been deleted: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:30.236630   67081 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-311000
	W0731 15:02:30.253577   67081 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-311000 returned with exit code 1
	I0731 15:02:30.253610   67081 kic.go:371] could not find the container multinode-311000 to remove it. will try anyways
	I0731 15:02:30.253692   67081 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:02:30.270600   67081 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	W0731 15:02:30.270659   67081 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:30.270747   67081 cli_runner.go:164] Run: docker exec --privileged -t multinode-311000 /bin/bash -c "sudo init 0"
	W0731 15:02:30.287320   67081 cli_runner.go:211] docker exec --privileged -t multinode-311000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0731 15:02:30.287349   67081 oci.go:650] error shutdown multinode-311000: docker exec --privileged -t multinode-311000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:31.287774   67081 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:02:31.306693   67081 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:02:31.306738   67081 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:31.306750   67081 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:02:31.306771   67081 retry.go:31] will retry after 367.367822ms: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:31.676077   67081 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:02:31.695842   67081 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:02:31.695887   67081 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:31.695902   67081 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:02:31.695928   67081 retry.go:31] will retry after 455.036779ms: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:32.152454   67081 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:02:32.173019   67081 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:02:32.173062   67081 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:32.173073   67081 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:02:32.173100   67081 retry.go:31] will retry after 627.747405ms: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:32.803253   67081 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:02:32.822514   67081 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:02:32.822562   67081 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:32.822577   67081 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:02:32.822601   67081 retry.go:31] will retry after 1.843131565s: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:34.667488   67081 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:02:34.687102   67081 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:02:34.687149   67081 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:34.687161   67081 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:02:34.687184   67081 retry.go:31] will retry after 1.600209973s: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:36.289193   67081 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:02:36.309585   67081 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:02:36.309627   67081 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:36.309636   67081 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:02:36.309660   67081 retry.go:31] will retry after 3.126275014s: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:39.438434   67081 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:02:39.458993   67081 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:02:39.459036   67081 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:39.459048   67081 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:02:39.459074   67081 retry.go:31] will retry after 5.201147038s: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:44.660524   67081 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:02:44.679772   67081 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:02:44.679815   67081 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:02:44.679825   67081 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:02:44.679868   67081 oci.go:88] couldn't shut down multinode-311000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	 
	I0731 15:02:44.679944   67081 cli_runner.go:164] Run: docker rm -f -v multinode-311000
	I0731 15:02:44.697624   67081 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-311000
	W0731 15:02:44.715134   67081 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-311000 returned with exit code 1
	I0731 15:02:44.715255   67081 cli_runner.go:164] Run: docker network inspect multinode-311000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 15:02:44.732478   67081 cli_runner.go:164] Run: docker network rm multinode-311000
	I0731 15:02:44.816180   67081 fix.go:124] Sleeping 1 second for extra luck!
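The oci.go/retry.go exchange above is a poll-with-backoff loop: minikube keeps re-running `docker container inspect --format={{.State.Status}}` with growing delays, and once the budget expires oci.go:88 shrugs ("might be okay") and falls through to `docker rm -f -v`. A minimal Go sketch of that shape, assuming a simple doubling delay and a 15-second budget (minikube's real backoff is jittered and its helpers are named differently):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // containerState runs `docker container inspect --format={{.State.Status}}`
    // and returns the trimmed status; the error mirrors the "unknown state"
    // case logged above when the container does not exist.
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", fmt.Errorf("unknown state %q: %v", name, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    // waitExited polls until the container reports "exited" or the budget
    // runs out, roughly doubling the delay between attempts.
    func waitExited(name string, budget time.Duration) error {
        deadline := time.Now().Add(budget)
        delay := 500 * time.Millisecond
        for time.Now().Before(deadline) {
            if state, err := containerState(name); err == nil && state == "exited" {
                return nil
            }
            fmt.Printf("will retry after %v: couldn't verify container is exited\n", delay)
            time.Sleep(delay)
            delay *= 2
        }
        return fmt.Errorf("couldn't verify %q is exited within %v", name, budget)
    }

    func main() {
        if err := waitExited("multinode-311000", 15*time.Second); err != nil {
            fmt.Println("might be okay:", err) // oci.go:88 tolerates this and deletes anyway
        }
    }
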
	I0731 15:02:45.818244   67081 start.go:125] createHost starting for "" (driver="docker")
	I0731 15:02:45.841429   67081 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0731 15:02:45.841610   67081 start.go:159] libmachine.API.Create for "multinode-311000" (driver="docker")
	I0731 15:02:45.841636   67081 client.go:168] LocalClient.Create starting
	I0731 15:02:45.841874   67081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/ca.pem
	I0731 15:02:45.841973   67081 main.go:141] libmachine: Decoding PEM data...
	I0731 15:02:45.841998   67081 main.go:141] libmachine: Parsing certificate...
	I0731 15:02:45.842075   67081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/cert.pem
	I0731 15:02:45.842154   67081 main.go:141] libmachine: Decoding PEM data...
	I0731 15:02:45.842168   67081 main.go:141] libmachine: Parsing certificate...
	I0731 15:02:45.843512   67081 cli_runner.go:164] Run: docker network inspect multinode-311000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 15:02:45.862305   67081 cli_runner.go:211] docker network inspect multinode-311000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 15:02:45.862400   67081 network_create.go:284] running [docker network inspect multinode-311000] to gather additional debugging logs...
	I0731 15:02:45.862418   67081 cli_runner.go:164] Run: docker network inspect multinode-311000
	W0731 15:02:45.879750   67081 cli_runner.go:211] docker network inspect multinode-311000 returned with exit code 1
	I0731 15:02:45.879777   67081 network_create.go:287] error running [docker network inspect multinode-311000]: docker network inspect multinode-311000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-311000 not found
	I0731 15:02:45.879789   67081 network_create.go:289] output of [docker network inspect multinode-311000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-311000 not found
	
	** /stderr **
	I0731 15:02:45.879943   67081 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 15:02:45.899367   67081 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:02:45.900932   67081 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:02:45.902645   67081 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:02:45.904466   67081 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:02:45.905327   67081 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001631c50}
	I0731 15:02:45.905352   67081 network_create.go:124] attempt to create docker network multinode-311000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0731 15:02:45.905485   67081 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-311000 multinode-311000
	I0731 15:02:45.969872   67081 network_create.go:108] docker network multinode-311000 192.168.85.0/24 created
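The network.go:209/206 lines above show the subnet picker skipping each private /24 already claimed by an existing Docker network; note that the candidates (49, 58, 67, 76, 85) advance the third octet by 9. A sketch of that scan, with the reserved set hard-coded for illustration and an illustrative function name (minikube actually derives the reservations from the Docker daemon):

    package main

    import "fmt"

    // freeSubnet steps through 192.168.<octet>.0/24 candidates, starting at
    // 49 and advancing the third octet by 9, returning the first unreserved one.
    func freeSubnet(reserved map[string]bool) (string, bool) {
        for octet := 49; octet <= 255; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !reserved[cidr] {
                return cidr, true
            }
        }
        return "", false
    }

    func main() {
        reserved := map[string]bool{ // the four subnets the log skips
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
            "192.168.76.0/24": true,
        }
        if cidr, ok := freeSubnet(reserved); ok {
            fmt.Println("using free private subnet", cidr) // 192.168.85.0/24
        }
    }
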
	I0731 15:02:45.969911   67081 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-311000" container
	I0731 15:02:45.970023   67081 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 15:02:45.987414   67081 cli_runner.go:164] Run: docker volume create multinode-311000 --label name.minikube.sigs.k8s.io=multinode-311000 --label created_by.minikube.sigs.k8s.io=true
	I0731 15:02:46.004279   67081 oci.go:103] Successfully created a docker volume multinode-311000
	I0731 15:02:46.004402   67081 cli_runner.go:164] Run: docker run --rm --name multinode-311000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-311000 --entrypoint /usr/bin/test -v multinode-311000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0731 15:02:46.281842   67081 oci.go:107] Successfully prepared a docker volume multinode-311000
	I0731 15:02:46.281880   67081 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:02:46.281909   67081 kic.go:194] Starting extracting preloaded images to volume ...
	I0731 15:02:46.282022   67081 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-311000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
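The kic.go:194 step mounts the preloaded-images tarball read-only and untars it into the cluster's named volume from inside the kicbase image; nearly six minutes then pass before the next log line. A Go transcription of that logged `docker run` (the constants are placeholders standing in for the full paths and image digest shown in the log line above):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Placeholders for the concrete values in the logged command.
        const (
            preload = "preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4"
            volume  = "multinode-311000"
            image   = "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326"
        )
        // Mount the tarball read-only, mount the named volume at /extractDir,
        // and run tar with lz4 decompression inside the kicbase image.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", preload+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract preload: %v\n%s", err, out)
        }
    }
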
	I0731 15:08:45.973578   67081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 15:08:45.973704   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:08:45.993740   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:08:45.993849   67081 retry.go:31] will retry after 343.41383ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:08:46.339624   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:08:46.359476   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:08:46.359588   67081 retry.go:31] will retry after 230.894849ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:08:46.592886   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:08:46.612868   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:08:46.612963   67081 retry.go:31] will retry after 423.069199ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:08:47.036964   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:08:47.056120   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	W0731 15:08:47.056252   67081 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	
	W0731 15:08:47.056271   67081 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:08:47.056339   67081 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 15:08:47.056391   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:08:47.073648   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:08:47.073740   67081 retry.go:31] will retry after 247.784948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:08:47.322171   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:08:47.341725   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:08:47.341819   67081 retry.go:31] will retry after 457.345593ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:08:47.801593   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:08:47.821375   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:08:47.821486   67081 retry.go:31] will retry after 470.269751ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:08:48.294269   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:08:48.313600   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:08:48.313700   67081 retry.go:31] will retry after 616.296673ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:08:48.930992   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:08:48.950925   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	W0731 15:08:48.951031   67081 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	
	W0731 15:08:48.951048   67081 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
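Each SSH attempt above first resolves the host port Docker mapped to the container's 22/tcp, using the inspect template quoted in the log; because the container never came up, every lookup ends in "No such container". A sketch of that lookup, with a hypothetical helper name:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshHostPort returns the host port mapped to 22/tcp inside the
    // container, using the same Go template the log shows.
    func sshHostPort(container string) (string, error) {
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect",
            "-f", format, container).Output()
        if err != nil {
            return "", fmt.Errorf("get port 22 for %q: %v", container, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("multinode-311000")
        if err != nil {
            fmt.Println(err) // here: exit status 1, "No such container"
            return
        }
        fmt.Println("ssh on 127.0.0.1:" + port)
    }
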
	I0731 15:08:48.951061   67081 start.go:128] duration metric: took 6m3.001686409s to createHost
	I0731 15:08:48.951131   67081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 15:08:48.951184   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:08:48.968331   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:08:48.968432   67081 retry.go:31] will retry after 319.613222ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:08:49.290414   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:08:49.310876   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:08:49.310981   67081 retry.go:31] will retry after 408.525819ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:08:49.721968   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:08:49.742230   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:08:49.742324   67081 retry.go:31] will retry after 702.382082ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:08:50.446033   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:08:50.465654   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	W0731 15:08:50.465755   67081 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	
	W0731 15:08:50.465772   67081 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:08:50.465855   67081 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 15:08:50.465910   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:08:50.483504   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:08:50.483598   67081 retry.go:31] will retry after 315.976814ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:08:50.800192   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:08:50.819479   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:08:50.819579   67081 retry.go:31] will retry after 360.831154ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:08:51.180981   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:08:51.200789   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:08:51.200892   67081 retry.go:31] will retry after 294.679577ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:08:51.498049   67081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:08:51.518428   67081 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	W0731 15:08:51.518535   67081 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	
	W0731 15:08:51.518552   67081 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
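The two probes start.go keeps retrying above are shell one-liners run over SSH: `df -h /var | awk 'NR==2{print $5}'` for the percentage of /var in use, and `df -BG /var | awk 'NR==2{print $4}'` for the GiB still free. A local illustration of both, run directly rather than over SSH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        for _, probe := range []string{
            `df -h /var | awk 'NR==2{print $5}'`,  // percent of /var in use
            `df -BG /var | awk 'NR==2{print $4}'`, // GiB of /var still free
        } {
            out, err := exec.Command("sh", "-c", probe).Output()
            if err != nil {
                fmt.Printf("error running %s: %v\n", probe, err)
                continue
            }
            fmt.Printf("%-40s -> %s", probe, out)
        }
    }
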
	I0731 15:08:51.518560   67081 fix.go:56] duration metric: took 6m21.272134228s for fixHost
	I0731 15:08:51.518567   67081 start.go:83] releasing machines lock for "multinode-311000", held for 6m21.272190466s
	W0731 15:08:51.518646   67081 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-311000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-311000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0731 15:08:51.560105   67081 out.go:177] 
	W0731 15:08:51.581067   67081 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0731 15:08:51.581146   67081 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0731 15:08:51.581172   67081 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0731 15:08:51.602244   67081 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-311000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-311000
helpers_test.go:235: (dbg) docker inspect multinode-311000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-311000",
	        "Id": "d8c031d940e4f0273fe250bb5f038320f81a63f0e645b18c628b8178d2d39ae5",
	        "Created": "2024-07-31T22:02:45.921946765Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-311000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000: exit status 7 (76.784902ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 15:08:51.754160   67584 status.go:249] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-311000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (750.51s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (105.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-311000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-311000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (103.077291ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-311000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-311000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-311000 -- rollout status deployment/busybox: exit status 1 (101.86917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-311000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.470647ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-311000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.995986ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-311000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.430621ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-311000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.16827ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-311000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.570921ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-311000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.606078ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-311000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.239637ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-311000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.127717ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-311000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0731 15:09:33.162330   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.302204ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-311000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.796235ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-311000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.342704ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-311000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
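multinode_test.go:505 polls kubectl's jsonpath for pod IPs and treats an error or empty answer as possibly temporary, failing at :524 once the retries are exhausted. A sketch of that polling loop, with an assumed retry count and interval (the test's actual schedule differs, and it invokes kubectl through the minikube binary):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // podIPs asks kubectl for the space-separated pod IPs in the default namespace.
    func podIPs(context string) ([]string, error) {
        out, err := exec.Command("kubectl", "--context", context, "get", "pods",
            "-o", "jsonpath={.items[*].status.podIP}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for attempt := 1; attempt <= 12; attempt++ {
            ips, err := podIPs("multinode-311000")
            if err == nil && len(ips) > 0 {
                fmt.Println("pod IPs:", ips)
                return
            }
            fmt.Printf("failed to retrieve Pod IPs (may be temporary): %v\n", err)
            time.Sleep(5 * time.Second)
        }
        fmt.Println("failed to resolve pod IPs") // the test fails here
    }
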
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (101.123503ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-311000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-311000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-311000 -- exec  -- nslookup kubernetes.io: exit status 1 (100.15152ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-311000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-311000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-311000 -- exec  -- nslookup kubernetes.default: exit status 1 (100.821738ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-311000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-311000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-311000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (101.333646ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-311000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-311000
helpers_test.go:235: (dbg) docker inspect multinode-311000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-311000",
	        "Id": "d8c031d940e4f0273fe250bb5f038320f81a63f0e645b18c628b8178d2d39ae5",
	        "Created": "2024-07-31T22:02:45.921946765Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-311000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000: exit status 7 (74.860809ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 15:10:37.560897   67655 status.go:249] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-311000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (105.80s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-311000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (100.582351ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-311000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-311000
helpers_test.go:235: (dbg) docker inspect multinode-311000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-311000",
	        "Id": "d8c031d940e4f0273fe250bb5f038320f81a63f0e645b18c628b8178d2d39ae5",
	        "Created": "2024-07-31T22:02:45.921946765Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-311000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000: exit status 7 (74.430118ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 15:10:37.756975   67662 status.go:249] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-311000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.20s)

                                                
                                    
TestMultiNode/serial/AddNode (0.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-311000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-311000 -v 3 --alsologtostderr: exit status 80 (162.017749ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 15:10:37.813162   67665 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:10:37.813442   67665 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:10:37.813448   67665 out.go:304] Setting ErrFile to fd 2...
	I0731 15:10:37.813452   67665 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:10:37.813637   67665 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 15:10:37.813980   67665 mustload.go:65] Loading cluster: multinode-311000
	I0731 15:10:37.814277   67665 config.go:182] Loaded profile config "multinode-311000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:10:37.814652   67665 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:10:37.831424   67665 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:10:37.853890   67665 out.go:177] 
	W0731 15:10:37.875334   67665 out.go:239] X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-311000 host status: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-311000 host status: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	
	I0731 15:10:37.896017   67665 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-311000 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-311000
helpers_test.go:235: (dbg) docker inspect multinode-311000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-311000",
	        "Id": "d8c031d940e4f0273fe250bb5f038320f81a63f0e645b18c628b8178d2d39ae5",
	        "Created": "2024-07-31T22:02:45.921946765Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-311000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000: exit status 7 (74.165096ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 15:10:38.014185   67669 status.go:249] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-311000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.26s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-311000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-311000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (37.086645ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-311000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-311000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-311000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
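The jsonpath above emits one JSON object of labels per node between brackets, so an empty stdout (which is what kubectl produces when the context is gone) fails to decode with exactly the "unexpected end of JSON input" seen here. A rough Go sketch of running and decoding that query (it assumes the trailing comma left by the {range} template must be trimmed; this is not the test's actual decoder):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same jsonpath the test uses: one labels object per node,
		// comma-separated inside brackets.
		out, err := exec.Command("kubectl", "--context", "multinode-311000",
			"get", "nodes", "-o",
			`jsonpath=[{range .items[*]}{.metadata.labels},{end}]`).Output()
		if err != nil {
			fmt.Println("kubectl failed:", err) // e.g. context not found
			return
		}
		// The {range} template leaves a trailing comma before the
		// closing bracket; drop it so the result is valid JSON.
		raw := strings.TrimSpace(string(out))
		if strings.HasSuffix(raw, ",]") {
			raw = strings.TrimSuffix(raw, ",]") + "]"
		}
		var labels []map[string]string
		if err := json.Unmarshal([]byte(raw), &labels); err != nil {
			// Empty stdout yields "unexpected end of JSON input" here.
			fmt.Println("decode failed:", err)
			return
		}
		for i, m := range labels {
			fmt.Printf("node %d: %d labels\n", i, len(m))
		}
	}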
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-311000
helpers_test.go:235: (dbg) docker inspect multinode-311000:

-- stdout --
	[
	    {
	        "Name": "multinode-311000",
	        "Id": "d8c031d940e4f0273fe250bb5f038320f81a63f0e645b18c628b8178d2d39ae5",
	        "Created": "2024-07-31T22:02:45.921946765Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-311000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000: exit status 7 (75.364571ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0731 15:10:38.147769   67674 status.go:249] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-311000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.13s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:166: expected profile "multinode-311000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-1-455000\",\"Status\":\"\",\"Config\":null,\"Active\":false,\"ActiveKubeContext\":false}],\"valid\":[{\"Name\":\"multinode-311000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-311000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-311000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
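The assertion above counts the entries in Config.Nodes of the `profile list --output json` payload. A small Go sketch of the same count, with the struct trimmed to just the fields visible in the log line (illustrative only, not the test's code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList mirrors only the parts of the payload the check needs;
	// the key names are taken from the JSON dumped in the failure above.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []struct {
					Name string `json:"Name"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64",
			"profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Println("profile list failed:", err)
			return
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, p := range pl.Valid {
			// The failing expectation: a 3-node cluster should report
			// 3 entries here, but this run has only 1.
			fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
		}
	}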
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-311000
helpers_test.go:235: (dbg) docker inspect multinode-311000:

-- stdout --
	[
	    {
	        "Name": "multinode-311000",
	        "Id": "d8c031d940e4f0273fe250bb5f038320f81a63f0e645b18c628b8178d2d39ae5",
	        "Created": "2024-07-31T22:02:45.921946765Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-311000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000: exit status 7 (74.614369ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0731 15:10:38.359300   67682 status.go:249] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-311000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (0.17s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-311000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-311000 status --output json --alsologtostderr: exit status 7 (74.754968ms)

-- stdout --
	{"Name":"multinode-311000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

-- /stdout --
** stderr ** 
	I0731 15:10:38.415092   67685 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:10:38.415358   67685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:10:38.415364   67685 out.go:304] Setting ErrFile to fd 2...
	I0731 15:10:38.415368   67685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:10:38.415568   67685 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 15:10:38.415749   67685 out.go:298] Setting JSON to true
	I0731 15:10:38.415770   67685 mustload.go:65] Loading cluster: multinode-311000
	I0731 15:10:38.415808   67685 notify.go:220] Checking for updates...
	I0731 15:10:38.416072   67685 config.go:182] Loaded profile config "multinode-311000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:10:38.416088   67685 status.go:255] checking status of multinode-311000 ...
	I0731 15:10:38.416481   67685 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:10:38.434093   67685 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:10:38.434154   67685 status.go:330] multinode-311000 host status = "" (err=state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	)
	I0731 15:10:38.434172   67685 status.go:257] multinode-311000 status: &{Name:multinode-311000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 15:10:38.434192   67685 status.go:260] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	E0731 15:10:38.434200   67685 status.go:263] The "multinode-311000" host does not exist!

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-311000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
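The unmarshal error above is a shape mismatch: with only one node left, `status --output json` prints a single object, while the test decodes into a slice ([]cmd.Status). A hedged Go sketch of a decoder that tolerates both shapes (not minikube's code; the sample input is copied from the stdout above):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	// decodeStatuses accepts either a single JSON object (one node) or
	// an array of objects (multi-node) -- the mismatch that tripped
	// the test above.
	func decodeStatuses(data []byte) ([]status, error) {
		var many []status
		if err := json.Unmarshal(data, &many); err == nil {
			return many, nil
		}
		var one status
		if err := json.Unmarshal(data, &one); err != nil {
			return nil, err
		}
		return []status{one}, nil
	}

	func main() {
		raw := []byte(`{"Name":"multinode-311000","Host":"Nonexistent",` +
			`"Kubelet":"Nonexistent","APIServer":"Nonexistent",` +
			`"Kubeconfig":"Nonexistent","Worker":false}`)
		sts, err := decodeStatuses(raw)
		fmt.Println(sts, err)
	}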
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-311000
helpers_test.go:235: (dbg) docker inspect multinode-311000:

-- stdout --
	[
	    {
	        "Name": "multinode-311000",
	        "Id": "d8c031d940e4f0273fe250bb5f038320f81a63f0e645b18c628b8178d2d39ae5",
	        "Created": "2024-07-31T22:02:45.921946765Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-311000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000: exit status 7 (74.61726ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0731 15:10:38.529557   67689 status.go:249] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-311000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.17s)

TestMultiNode/serial/StopNode (0.4s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-311000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-311000 node stop m03: exit status 85 (154.994138ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-311000 node stop m03": exit status 85
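Exit status 85 (GUEST_NODE_RETRIEVE) means the m03 node was never registered in this profile, because the earlier AddNode steps failed. One way a caller could make such a step defensive is to consult `minikube node list` first; a sketch under the assumption that node names appear in the first column of that output (the exact format may vary by version):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hasNode reports whether the profile knows about the named node,
	// letting a caller skip `node stop` rather than hit exit status 85.
	func hasNode(profile, node string) bool {
		out, err := exec.Command("out/minikube-darwin-amd64",
			"-p", profile, "node", "list").Output()
		if err != nil {
			return false
		}
		for _, line := range strings.Split(string(out), "\n") {
			f := strings.Fields(line)
			if len(f) == 0 {
				continue
			}
			// Accept the full machine name or the short m03 suffix form.
			if f[0] == node || strings.HasSuffix(f[0], "-"+node) {
				return true
			}
		}
		return false
	}

	func main() {
		fmt.Println(hasNode("multinode-311000", "m03"))
	}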
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-311000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-311000 status: exit status 7 (75.07683ms)

-- stdout --
	multinode-311000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0731 15:10:38.760186   67694 status.go:260] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	E0731 15:10:38.760197   67694 status.go:263] The "multinode-311000" host does not exist!

** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-311000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-311000 status --alsologtostderr: exit status 7 (74.820604ms)

-- stdout --
	multinode-311000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0731 15:10:38.816294   67697 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:10:38.816486   67697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:10:38.816491   67697 out.go:304] Setting ErrFile to fd 2...
	I0731 15:10:38.816495   67697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:10:38.816684   67697 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 15:10:38.816868   67697 out.go:298] Setting JSON to false
	I0731 15:10:38.816888   67697 mustload.go:65] Loading cluster: multinode-311000
	I0731 15:10:38.816925   67697 notify.go:220] Checking for updates...
	I0731 15:10:38.817188   67697 config.go:182] Loaded profile config "multinode-311000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:10:38.817203   67697 status.go:255] checking status of multinode-311000 ...
	I0731 15:10:38.817612   67697 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:10:38.835058   67697 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:10:38.835135   67697 status.go:330] multinode-311000 host status = "" (err=state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	)
	I0731 15:10:38.835154   67697 status.go:257] multinode-311000 status: &{Name:multinode-311000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 15:10:38.835178   67697 status.go:260] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	E0731 15:10:38.835190   67697 status.go:263] The "multinode-311000" host does not exist!

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-311000 status --alsologtostderr": multinode-311000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:271: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-311000 status --alsologtostderr": multinode-311000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:275: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-311000 status --alsologtostderr": multinode-311000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-311000
helpers_test.go:235: (dbg) docker inspect multinode-311000:

-- stdout --
	[
	    {
	        "Name": "multinode-311000",
	        "Id": "d8c031d940e4f0273fe250bb5f038320f81a63f0e645b18c628b8178d2d39ae5",
	        "Created": "2024-07-31T22:02:45.921946765Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-311000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000: exit status 7 (75.04687ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0731 15:10:38.931071   67701 status.go:249] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-311000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.40s)

TestMultiNode/serial/StartAfterStop (53.31s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-311000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-311000 node start m03 -v=7 --alsologtostderr: exit status 85 (152.316264ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0731 15:10:38.987807   67704 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:10:38.987992   67704 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:10:38.987997   67704 out.go:304] Setting ErrFile to fd 2...
	I0731 15:10:38.988001   67704 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:10:38.988177   67704 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 15:10:38.988517   67704 mustload.go:65] Loading cluster: multinode-311000
	I0731 15:10:38.988811   67704 config.go:182] Loaded profile config "multinode-311000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:10:39.011076   67704 out.go:177] 
	W0731 15:10:39.031868   67704 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0731 15:10:39.031892   67704 out.go:239] * 
	* 
	W0731 15:10:39.040823   67704 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 15:10:39.061782   67704 out.go:177] 

** /stderr **
multinode_test.go:284: I0731 15:10:38.987807   67704 out.go:291] Setting OutFile to fd 1 ...
I0731 15:10:38.987992   67704 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 15:10:38.987997   67704 out.go:304] Setting ErrFile to fd 2...
I0731 15:10:38.988001   67704 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 15:10:38.988177   67704 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
I0731 15:10:38.988517   67704 mustload.go:65] Loading cluster: multinode-311000
I0731 15:10:38.988811   67704 config.go:182] Loaded profile config "multinode-311000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 15:10:39.011076   67704 out.go:177] 
W0731 15:10:39.031868   67704 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0731 15:10:39.031892   67704 out.go:239] * 
* 
W0731 15:10:39.040823   67704 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0731 15:10:39.061782   67704 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-311000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-311000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-311000 status -v=7 --alsologtostderr: exit status 7 (75.462756ms)

-- stdout --
	multinode-311000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0731 15:10:39.139667   67706 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:10:39.140458   67706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:10:39.140467   67706 out.go:304] Setting ErrFile to fd 2...
	I0731 15:10:39.140474   67706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:10:39.140989   67706 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 15:10:39.141198   67706 out.go:298] Setting JSON to false
	I0731 15:10:39.141221   67706 mustload.go:65] Loading cluster: multinode-311000
	I0731 15:10:39.141253   67706 notify.go:220] Checking for updates...
	I0731 15:10:39.141482   67706 config.go:182] Loaded profile config "multinode-311000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:10:39.141498   67706 status.go:255] checking status of multinode-311000 ...
	I0731 15:10:39.141872   67706 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:10:39.159186   67706 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:10:39.159257   67706 status.go:330] multinode-311000 host status = "" (err=state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	)
	I0731 15:10:39.159283   67706 status.go:257] multinode-311000 status: &{Name:multinode-311000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 15:10:39.159303   67706 status.go:260] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	E0731 15:10:39.159321   67706 status.go:263] The "multinode-311000" host does not exist!

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-311000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-311000 status -v=7 --alsologtostderr: exit status 7 (80.38257ms)

-- stdout --
	multinode-311000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0731 15:10:40.299059   67709 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:10:40.299814   67709 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:10:40.299823   67709 out.go:304] Setting ErrFile to fd 2...
	I0731 15:10:40.299829   67709 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:10:40.300427   67709 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 15:10:40.300612   67709 out.go:298] Setting JSON to false
	I0731 15:10:40.300634   67709 mustload.go:65] Loading cluster: multinode-311000
	I0731 15:10:40.300668   67709 notify.go:220] Checking for updates...
	I0731 15:10:40.300885   67709 config.go:182] Loaded profile config "multinode-311000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:10:40.300901   67709 status.go:255] checking status of multinode-311000 ...
	I0731 15:10:40.301311   67709 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:10:40.319742   67709 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:10:40.319806   67709 status.go:330] multinode-311000 host status = "" (err=state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	)
	I0731 15:10:40.319827   67709 status.go:257] multinode-311000 status: &{Name:multinode-311000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 15:10:40.319850   67709 status.go:260] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	E0731 15:10:40.319864   67709 status.go:263] The "multinode-311000" host does not exist!

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-311000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-311000 status -v=7 --alsologtostderr: exit status 7 (79.20013ms)

-- stdout --
	multinode-311000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0731 15:10:42.179752   67712 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:10:42.179933   67712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:10:42.179939   67712 out.go:304] Setting ErrFile to fd 2...
	I0731 15:10:42.179942   67712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:10:42.180111   67712 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 15:10:42.180292   67712 out.go:298] Setting JSON to false
	I0731 15:10:42.180315   67712 mustload.go:65] Loading cluster: multinode-311000
	I0731 15:10:42.180350   67712 notify.go:220] Checking for updates...
	I0731 15:10:42.180590   67712 config.go:182] Loaded profile config "multinode-311000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:10:42.180608   67712 status.go:255] checking status of multinode-311000 ...
	I0731 15:10:42.181015   67712 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:10:42.199507   67712 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:10:42.199577   67712 status.go:330] multinode-311000 host status = "" (err=state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	)
	I0731 15:10:42.199597   67712 status.go:257] multinode-311000 status: &{Name:multinode-311000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 15:10:42.199624   67712 status.go:260] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	E0731 15:10:42.199632   67712 status.go:263] The "multinode-311000" host does not exist!

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-311000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-311000 status -v=7 --alsologtostderr: exit status 7 (79.720691ms)

-- stdout --
	multinode-311000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0731 15:10:44.553032   67717 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:10:44.553229   67717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:10:44.553235   67717 out.go:304] Setting ErrFile to fd 2...
	I0731 15:10:44.553238   67717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:10:44.553401   67717 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 15:10:44.553587   67717 out.go:298] Setting JSON to false
	I0731 15:10:44.553615   67717 mustload.go:65] Loading cluster: multinode-311000
	I0731 15:10:44.553650   67717 notify.go:220] Checking for updates...
	I0731 15:10:44.553893   67717 config.go:182] Loaded profile config "multinode-311000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:10:44.553910   67717 status.go:255] checking status of multinode-311000 ...
	I0731 15:10:44.554313   67717 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:10:44.571583   67717 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:10:44.571640   67717 status.go:330] multinode-311000 host status = "" (err=state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	)
	I0731 15:10:44.571662   67717 status.go:257] multinode-311000 status: &{Name:multinode-311000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 15:10:44.571685   67717 status.go:260] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	E0731 15:10:44.571691   67717 status.go:263] The "multinode-311000" host does not exist!

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-311000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-311000 status -v=7 --alsologtostderr: exit status 7 (79.216692ms)

-- stdout --
	multinode-311000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0731 15:10:46.532814   67720 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:10:46.533080   67720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:10:46.533086   67720 out.go:304] Setting ErrFile to fd 2...
	I0731 15:10:46.533089   67720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:10:46.533264   67720 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 15:10:46.533440   67720 out.go:298] Setting JSON to false
	I0731 15:10:46.533458   67720 mustload.go:65] Loading cluster: multinode-311000
	I0731 15:10:46.533495   67720 notify.go:220] Checking for updates...
	I0731 15:10:46.533761   67720 config.go:182] Loaded profile config "multinode-311000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:10:46.533777   67720 status.go:255] checking status of multinode-311000 ...
	I0731 15:10:46.534180   67720 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:10:46.551562   67720 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:10:46.551620   67720 status.go:330] multinode-311000 host status = "" (err=state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	)
	I0731 15:10:46.551658   67720 status.go:257] multinode-311000 status: &{Name:multinode-311000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 15:10:46.551684   67720 status.go:260] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	E0731 15:10:46.551691   67720 status.go:263] The "multinode-311000" host does not exist!

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-311000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-311000 status -v=7 --alsologtostderr: exit status 7 (78.6869ms)

-- stdout --
	multinode-311000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0731 15:10:52.772063   67723 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:10:52.772261   67723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:10:52.772267   67723 out.go:304] Setting ErrFile to fd 2...
	I0731 15:10:52.772271   67723 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:10:52.772444   67723 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 15:10:52.772629   67723 out.go:298] Setting JSON to false
	I0731 15:10:52.772651   67723 mustload.go:65] Loading cluster: multinode-311000
	I0731 15:10:52.772690   67723 notify.go:220] Checking for updates...
	I0731 15:10:52.772941   67723 config.go:182] Loaded profile config "multinode-311000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:10:52.772959   67723 status.go:255] checking status of multinode-311000 ...
	I0731 15:10:52.773352   67723 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:10:52.790633   67723 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:10:52.790695   67723 status.go:330] multinode-311000 host status = "" (err=state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	)
	I0731 15:10:52.790724   67723 status.go:257] multinode-311000 status: &{Name:multinode-311000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 15:10:52.790750   67723 status.go:260] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	E0731 15:10:52.790755   67723 status.go:263] The "multinode-311000" host does not exist!

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-311000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-311000 status -v=7 --alsologtostderr: exit status 7 (78.18374ms)

-- stdout --
	multinode-311000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0731 15:10:58.923192   67727 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:10:58.923392   67727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:10:58.923397   67727 out.go:304] Setting ErrFile to fd 2...
	I0731 15:10:58.923401   67727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:10:58.923618   67727 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 15:10:58.923789   67727 out.go:298] Setting JSON to false
	I0731 15:10:58.923830   67727 mustload.go:65] Loading cluster: multinode-311000
	I0731 15:10:58.923867   67727 notify.go:220] Checking for updates...
	I0731 15:10:58.924101   67727 config.go:182] Loaded profile config "multinode-311000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:10:58.924120   67727 status.go:255] checking status of multinode-311000 ...
	I0731 15:10:58.924561   67727 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:10:58.943044   67727 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:10:58.943108   67727 status.go:330] multinode-311000 host status = "" (err=state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	)
	I0731 15:10:58.943129   67727 status.go:257] multinode-311000 status: &{Name:multinode-311000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 15:10:58.943149   67727 status.go:260] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	E0731 15:10:58.943157   67727 status.go:263] The "multinode-311000" host does not exist!

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-311000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-311000 status -v=7 --alsologtostderr: exit status 7 (83.116454ms)

-- stdout --
	multinode-311000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0731 15:11:13.238174   67733 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:11:13.239091   67733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:11:13.239147   67733 out.go:304] Setting ErrFile to fd 2...
	I0731 15:11:13.239156   67733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:11:13.239656   67733 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 15:11:13.239856   67733 out.go:298] Setting JSON to false
	I0731 15:11:13.239878   67733 mustload.go:65] Loading cluster: multinode-311000
	I0731 15:11:13.239913   67733 notify.go:220] Checking for updates...
	I0731 15:11:13.240141   67733 config.go:182] Loaded profile config "multinode-311000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:11:13.240158   67733 status.go:255] checking status of multinode-311000 ...
	I0731 15:11:13.240542   67733 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:11:13.257935   67733 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:11:13.258003   67733 status.go:330] multinode-311000 host status = "" (err=state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	)
	I0731 15:11:13.258029   67733 status.go:257] multinode-311000 status: &{Name:multinode-311000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 15:11:13.258052   67733 status.go:260] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	E0731 15:11:13.258059   67733 status.go:263] The "multinode-311000" host does not exist!

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-311000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-311000 status -v=7 --alsologtostderr: exit status 7 (80.977091ms)

-- stdout --
	multinode-311000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0731 15:11:32.124007   67739 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:11:32.124197   67739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:11:32.124203   67739 out.go:304] Setting ErrFile to fd 2...
	I0731 15:11:32.124206   67739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:11:32.124388   67739 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 15:11:32.124573   67739 out.go:298] Setting JSON to false
	I0731 15:11:32.124596   67739 mustload.go:65] Loading cluster: multinode-311000
	I0731 15:11:32.124633   67739 notify.go:220] Checking for updates...
	I0731 15:11:32.124863   67739 config.go:182] Loaded profile config "multinode-311000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:11:32.124881   67739 status.go:255] checking status of multinode-311000 ...
	I0731 15:11:32.125271   67739 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:11:32.143704   67739 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:11:32.143761   67739 status.go:330] multinode-311000 host status = "" (err=state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	)
	I0731 15:11:32.143782   67739 status.go:257] multinode-311000 status: &{Name:multinode-311000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 15:11:32.143805   67739 status.go:260] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	E0731 15:11:32.143811   67739 status.go:263] The "multinode-311000" host does not exist!

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-311000 status -v=7 --alsologtostderr" : exit status 7
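The status probe above reduces to a single docker query: read {{.State.Status}} for the node container and translate a "No such container" failure into the Nonexistent state reported in stdout. A minimal Go sketch of that mapping, assuming only the docker CLI on PATH (the helper name is illustrative, not minikube's own):

    package main

    import (
            "fmt"
            "os/exec"
            "strings"
    )

    // containerState returns the docker state string for name, or
    // "Nonexistent" when the daemon reports no such container.
    func containerState(name string) (string, error) {
            out, err := exec.Command("docker", "container", "inspect", name,
                    "--format", "{{.State.Status}}").CombinedOutput()
            if err != nil {
                    if strings.Contains(string(out), "No such container") {
                            return "Nonexistent", nil
                    }
                    return "", fmt.Errorf("inspect %s: %v: %s", name, err, out)
            }
            return strings.TrimSpace(string(out)), nil
    }

    func main() {
            state, err := containerState("multinode-311000")
            if err != nil {
                    fmt.Println("status error:", err)
                    return
            }
            fmt.Println("host:", state)
    }
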
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-311000
helpers_test.go:235: (dbg) docker inspect multinode-311000:

-- stdout --
	[
	    {
	        "Name": "multinode-311000",
	        "Id": "d8c031d940e4f0273fe250bb5f038320f81a63f0e645b18c628b8178d2d39ae5",
	        "Created": "2024-07-31T22:02:45.921946765Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-311000"
	        }
	    }
	]

-- /stdout --
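Note that the post-mortem inspect above returned the network object (see the Scope, IPAM, and Containers fields), not the container: plain docker inspect resolves any object with a matching name, and only the network named multinode-311000 survived the failed recreate. Passing --type makes the distinction explicit; a small Go sketch, assuming the docker CLI:

    package main

    import (
            "fmt"
            "os/exec"
    )

    // inspectType reports whether a docker object of the given type
    // ("container", "network", ...) exists under the given name.
    func inspectType(kind, name string) bool {
            return exec.Command("docker", "inspect", "--type="+kind, name).Run() == nil
    }

    func main() {
            name := "multinode-311000"
            fmt.Println("container exists:", inspectType("container", name)) // false in this run
            fmt.Println("network exists:  ", inspectType("network", name))   // true in this run
    }
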
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000: exit status 7 (74.96531ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0731 15:11:32.239378   67743 status.go:249] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-311000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (53.31s)

TestMultiNode/serial/RestartKeepsNodes (790.27s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-311000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-311000
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-311000: exit status 82 (13.791261703s)

-- stdout --
	* Stopping node "multinode-311000"  ...
	* Stopping node "multinode-311000"  ...
	* Stopping node "multinode-311000"  ...
	* Stopping node "multinode-311000"  ...
	* Stopping node "multinode-311000"  ...
	* Stopping node "multinode-311000"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-311000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-311000" : exit status 82
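The six identical "Stopping node" lines followed by exit status 82 are consistent with a bounded stop-retry loop that gives up once the container state cannot be read at all. A schematic Go sketch of that shape (the attempt count and exit code are taken from the log; the helper is an invented stand-in, not minikube's implementation):

    package main

    import (
            "fmt"
            "os"
            "os/exec"
    )

    const guestStopTimeoutExit = 82 // exit code seen in the log above

    // stopNode can only stop a node whose container state is readable.
    func stopNode(name string) error {
            fmt.Printf("* Stopping node %q  ...\n", name)
            return exec.Command("docker", "container", "inspect", name,
                    "--format", "{{.State.Status}}").Run()
    }

    func main() {
            name := "multinode-311000"
            for attempt := 0; attempt < 6; attempt++ { // six attempts, as in the log
                    if err := stopNode(name); err == nil {
                            return
                    }
            }
            fmt.Fprintf(os.Stderr, "X Exiting due to GUEST_STOP_TIMEOUT: unable to stop %q\n", name)
            os.Exit(guestStopTimeoutExit)
    }
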
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-311000 --wait=true -v=8 --alsologtostderr
E0731 15:13:24.174896   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 15:13:41.023209   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 15:14:33.172311   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 15:18:41.032659   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 15:19:16.233013   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 15:19:33.180206   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 15:23:41.039160   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 15:24:33.188243   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-311000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m56.25216927s)

-- stdout --
	* [multinode-311000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-311000" primary control-plane node in "multinode-311000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* docker "multinode-311000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-311000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0731 15:11:46.147890   67764 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:11:46.148149   67764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:11:46.148154   67764 out.go:304] Setting ErrFile to fd 2...
	I0731 15:11:46.148158   67764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:11:46.148323   67764 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 15:11:46.149773   67764 out.go:298] Setting JSON to false
	I0731 15:11:46.172270   67764 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":20473,"bootTime":1722443433,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0731 15:11:46.172357   67764 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:11:46.194223   67764 out.go:177] * [multinode-311000] minikube v1.33.1 on Darwin 14.5
	I0731 15:11:46.235980   67764 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 15:11:46.236022   67764 notify.go:220] Checking for updates...
	I0731 15:11:46.279322   67764 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	I0731 15:11:46.300956   67764 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0731 15:11:46.321985   67764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:11:46.343140   67764 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	I0731 15:11:46.363937   67764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:11:46.385805   67764 config.go:182] Loaded profile config "multinode-311000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:11:46.385986   67764 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:11:46.411047   67764 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0731 15:11:46.411392   67764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 15:11:46.490061   67764 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:false NGoroutines:98 SystemTime:2024-07-31 22:11:46.480862016 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0731 15:11:46.511932   67764 out.go:177] * Using the docker driver based on existing profile
	I0731 15:11:46.533612   67764 start.go:297] selected driver: docker
	I0731 15:11:46.533641   67764 start.go:901] validating driver "docker" against &{Name:multinode-311000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:11:46.533801   67764 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:11:46.533996   67764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 15:11:46.614094   67764 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:false NGoroutines:98 SystemTime:2024-07-31 22:11:46.604934053 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0731 15:11:46.617136   67764 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:11:46.617201   67764 cni.go:84] Creating CNI manager for ""
	I0731 15:11:46.617211   67764 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 15:11:46.617291   67764 start.go:340] cluster config:
	{Name:multinode-311000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:11:46.659933   67764 out.go:177] * Starting "multinode-311000" primary control-plane node in "multinode-311000" cluster
	I0731 15:11:46.681857   67764 cache.go:121] Beginning downloading kic base image for docker with docker
	I0731 15:11:46.703858   67764 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0731 15:11:46.745791   67764 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:11:46.745827   67764 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0731 15:11:46.745868   67764 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0731 15:11:46.745923   67764 cache.go:56] Caching tarball of preloaded images
	I0731 15:11:46.746153   67764 preload.go:172] Found /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 15:11:46.746172   67764 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:11:46.747085   67764 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/multinode-311000/config.json ...
	W0731 15:11:46.771963   67764 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0731 15:11:46.771987   67764 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0731 15:11:46.772127   67764 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0731 15:11:46.772145   67764 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0731 15:11:46.772152   67764 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0731 15:11:46.772160   67764 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0731 15:11:46.772165   67764 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0731 15:11:46.775135   67764 image.go:273] response: 
	I0731 15:11:46.902685   67764 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0731 15:11:46.902750   67764 cache.go:194] Successfully downloaded all kic artifacts
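The cache path above resolves the kic base image in stages: the local docker daemon first, then the on-disk tarball cache, with a network pull only as a last resort. A condensed Go sketch of that fallback, assuming a cached docker-save tarball; the tarball path and the tag-only image reference are illustrative, not minikube's actual cache layout:

    package main

    import (
            "fmt"
            "os"
            "os/exec"
    )

    // ensureBaseImage makes image available to the local daemon,
    // preferring a cached tarball over a network pull.
    func ensureBaseImage(image, tarball string) error {
            if exec.Command("docker", "image", "inspect", image).Run() == nil {
                    return nil // already present in the daemon
            }
            if _, err := os.Stat(tarball); err == nil {
                    return exec.Command("docker", "load", "-i", tarball).Run()
            }
            return exec.Command("docker", "pull", image).Run() // last resort
    }

    func main() {
            img := "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326"
            // Illustrative cache location, not minikube's real path.
            tarball := os.ExpandEnv("$HOME/.minikube/cache/kic/amd64/kicbase.tar")
            if err := ensureBaseImage(img, tarball); err != nil {
                    fmt.Println("base image:", err)
            }
    }
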
	I0731 15:11:46.902795   67764 start.go:360] acquireMachinesLock for multinode-311000: {Name:mk7981435695037af8cd786e9a728446a653cd34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:11:46.902899   67764 start.go:364] duration metric: took 85.845µs to acquireMachinesLock for "multinode-311000"
	I0731 15:11:46.902923   67764 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:11:46.902934   67764 fix.go:54] fixHost starting: 
	I0731 15:11:46.903179   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:11:46.920331   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:11:46.920384   67764 fix.go:112] recreateIfNeeded on multinode-311000: state= err=unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:11:46.920410   67764 fix.go:117] machineExists: false. err=machine does not exist
	I0731 15:11:46.942084   67764 out.go:177] * docker "multinode-311000" container is missing, will recreate.
	I0731 15:11:46.962720   67764 delete.go:124] DEMOLISHING multinode-311000 ...
	I0731 15:11:46.962834   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:11:46.979729   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	W0731 15:11:46.979775   67764 stop.go:83] unable to get state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:11:46.979789   67764 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:11:46.980151   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:11:46.999210   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:11:46.999271   67764 delete.go:82] Unable to get host status for multinode-311000, assuming it has already been deleted: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:11:46.999362   67764 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-311000
	W0731 15:11:47.017165   67764 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-311000 returned with exit code 1
	I0731 15:11:47.017205   67764 kic.go:371] could not find the container multinode-311000 to remove it. will try anyways
	I0731 15:11:47.017273   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:11:47.034217   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	W0731 15:11:47.034266   67764 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:11:47.034351   67764 cli_runner.go:164] Run: docker exec --privileged -t multinode-311000 /bin/bash -c "sudo init 0"
	W0731 15:11:47.051337   67764 cli_runner.go:211] docker exec --privileged -t multinode-311000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0731 15:11:47.051368   67764 oci.go:650] error shutdown multinode-311000: docker exec --privileged -t multinode-311000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:11:48.052180   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:11:48.069392   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:11:48.069437   67764 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:11:48.069446   67764 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:11:48.069486   67764 retry.go:31] will retry after 675.523435ms: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:11:48.745277   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:11:48.762433   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:11:48.762475   67764 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:11:48.762483   67764 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:11:48.762507   67764 retry.go:31] will retry after 429.124615ms: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:11:49.191873   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:11:49.208794   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:11:49.208839   67764 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:11:49.208849   67764 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:11:49.208874   67764 retry.go:31] will retry after 1.595418257s: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:11:50.805474   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:11:50.822499   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:11:50.822554   67764 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:11:50.822564   67764 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:11:50.822588   67764 retry.go:31] will retry after 1.722889806s: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:11:52.545799   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:11:52.562987   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:11:52.563033   67764 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:11:52.563049   67764 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:11:52.563069   67764 retry.go:31] will retry after 1.885456493s: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:11:54.448787   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:11:54.466000   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:11:54.466042   67764 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:11:54.466054   67764 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:11:54.466077   67764 retry.go:31] will retry after 2.638732755s: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:11:57.105960   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:11:57.124974   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:11:57.125016   67764 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:11:57.125026   67764 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:11:57.125053   67764 retry.go:31] will retry after 7.758972226s: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:12:04.886529   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:12:04.906416   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:12:04.906467   67764 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:12:04.906478   67764 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:12:04.906509   67764 oci.go:88] couldn't shut down multinode-311000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	 
	I0731 15:12:04.906592   67764 cli_runner.go:164] Run: docker rm -f -v multinode-311000
	I0731 15:12:04.925129   67764 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-311000
	W0731 15:12:04.943144   67764 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-311000 returned with exit code 1
	I0731 15:12:04.943258   67764 cli_runner.go:164] Run: docker network inspect multinode-311000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 15:12:04.960810   67764 cli_runner.go:164] Run: docker network rm multinode-311000
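The DEMOLISHING phase above is a verify-shutdown loop with growing backoff that, after failing to confirm an exited state, falls through to force-removing the container (with its volumes) and the named network, exactly the two docker commands just logged. A compact Go sketch of the same shape; the bounds and durations are illustrative:

    package main

    import (
            "fmt"
            "os/exec"
            "time"
    )

    // demolish tries to confirm the container has exited, then
    // force-removes the container and its named network.
    func demolish(name string) {
            backoff := 500 * time.Millisecond
            for i := 0; i < 8; i++ { // bounded verification, as in the log
                    out, err := exec.Command("docker", "container", "inspect", name,
                            "--format", "{{.State.Status}}").Output()
                    if err == nil && string(out) == "exited\n" {
                            break
                    }
                    time.Sleep(backoff)
                    backoff *= 2 // grow the wait between probes
            }
            _ = exec.Command("docker", "rm", "-f", "-v", name).Run()
            _ = exec.Command("docker", "network", "rm", name).Run()
    }

    func main() {
            demolish("multinode-311000")
            fmt.Println("demolished multinode-311000 (best effort)")
    }
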
	I0731 15:12:05.042917   67764 fix.go:124] Sleeping 1 second for extra luck!
	I0731 15:12:06.045108   67764 start.go:125] createHost starting for "" (driver="docker")
	I0731 15:12:06.068616   67764 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0731 15:12:06.068815   67764 start.go:159] libmachine.API.Create for "multinode-311000" (driver="docker")
	I0731 15:12:06.068859   67764 client.go:168] LocalClient.Create starting
	I0731 15:12:06.069072   67764 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/ca.pem
	I0731 15:12:06.069171   67764 main.go:141] libmachine: Decoding PEM data...
	I0731 15:12:06.069206   67764 main.go:141] libmachine: Parsing certificate...
	I0731 15:12:06.069315   67764 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/cert.pem
	I0731 15:12:06.069394   67764 main.go:141] libmachine: Decoding PEM data...
	I0731 15:12:06.069408   67764 main.go:141] libmachine: Parsing certificate...
	I0731 15:12:06.091305   67764 cli_runner.go:164] Run: docker network inspect multinode-311000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 15:12:06.109938   67764 cli_runner.go:211] docker network inspect multinode-311000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 15:12:06.110040   67764 network_create.go:284] running [docker network inspect multinode-311000] to gather additional debugging logs...
	I0731 15:12:06.110062   67764 cli_runner.go:164] Run: docker network inspect multinode-311000
	W0731 15:12:06.127750   67764 cli_runner.go:211] docker network inspect multinode-311000 returned with exit code 1
	I0731 15:12:06.127778   67764 network_create.go:287] error running [docker network inspect multinode-311000]: docker network inspect multinode-311000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-311000 not found
	I0731 15:12:06.127792   67764 network_create.go:289] output of [docker network inspect multinode-311000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-311000 not found
	
	** /stderr **
	I0731 15:12:06.127932   67764 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 15:12:06.147421   67764 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:12:06.148829   67764 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:12:06.149192   67764 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0007a77f0}
	I0731 15:12:06.149208   67764 network_create.go:124] attempt to create docker network multinode-311000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0731 15:12:06.149276   67764 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-311000 multinode-311000
	W0731 15:12:06.176205   67764 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-311000 multinode-311000 returned with exit code 1
	W0731 15:12:06.176239   67764 network_create.go:149] failed to create docker network multinode-311000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-311000 multinode-311000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0731 15:12:06.176255   67764 network_create.go:116] failed to create docker network multinode-311000 192.168.67.0/24, will retry: subnet is taken
	I0731 15:12:06.177873   67764 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:12:06.178318   67764 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00180c450}
	I0731 15:12:06.178332   67764 network_create.go:124] attempt to create docker network multinode-311000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0731 15:12:06.178466   67764 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-311000 multinode-311000
	I0731 15:12:06.242089   67764 network_create.go:108] docker network multinode-311000 192.168.76.0/24 created
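Network creation walks the private 192.168.x.0/24 ranges (49, 58, 67, 76, ...) and treats both a client-side reservation and the daemon's "Pool overlaps" error as "subnet taken, advance and retry", which is how this run landed on 192.168.76.0/24. A Go sketch of that scan, stepping the third octet by 9 as the candidates in the log suggest (the step size is inferred from the log, not from documentation):

    package main

    import (
            "fmt"
            "os/exec"
            "strings"
    )

    // createNetwork walks candidate /24 subnets until docker accepts
    // one, skipping any that overlap an existing address pool.
    func createNetwork(name string) (string, error) {
            for octet := 49; octet <= 103; octet += 9 { // 49, 58, 67, 76, ...
                    subnet := fmt.Sprintf("192.168.%d.0/24", octet)
                    gateway := fmt.Sprintf("192.168.%d.1", octet)
                    out, err := exec.Command("docker", "network", "create",
                            "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
                            name).CombinedOutput()
                    if err == nil {
                            return subnet, nil
                    }
                    if strings.Contains(string(out), "Pool overlaps") {
                            continue // subnet taken, try the next range
                    }
                    return "", fmt.Errorf("network create: %v: %s", err, out)
            }
            return "", fmt.Errorf("no free 192.168.x.0/24 subnet found")
    }

    func main() {
            subnet, err := createNetwork("multinode-311000")
            fmt.Println(subnet, err)
    }
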
	I0731 15:12:06.242201   67764 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-311000" container
	I0731 15:12:06.242323   67764 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 15:12:06.260421   67764 cli_runner.go:164] Run: docker volume create multinode-311000 --label name.minikube.sigs.k8s.io=multinode-311000 --label created_by.minikube.sigs.k8s.io=true
	I0731 15:12:06.277703   67764 oci.go:103] Successfully created a docker volume multinode-311000
	I0731 15:12:06.277818   67764 cli_runner.go:164] Run: docker run --rm --name multinode-311000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-311000 --entrypoint /usr/bin/test -v multinode-311000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0731 15:12:06.544849   67764 oci.go:107] Successfully prepared a docker volume multinode-311000
	I0731 15:12:06.544902   67764 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:12:06.544923   67764 kic.go:194] Starting extracting preloaded images to volume ...
	I0731 15:12:06.545067   67764 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-311000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 15:18:06.080896   67764 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 15:18:06.081024   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:18:06.101889   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:18:06.102017   67764 retry.go:31] will retry after 172.338638ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:06.276839   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:18:06.296739   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:18:06.296848   67764 retry.go:31] will retry after 247.601276ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:06.546460   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:18:06.566037   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:18:06.566136   67764 retry.go:31] will retry after 573.481795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:07.141366   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:18:07.160781   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:18:07.160891   67764 retry.go:31] will retry after 532.196199ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:07.695547   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:18:07.715570   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	W0731 15:18:07.715677   67764 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	
	W0731 15:18:07.715697   67764 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:07.715763   67764 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 15:18:07.715817   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:18:07.733662   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:18:07.733750   67764 retry.go:31] will retry after 297.925724ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:08.034127   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:18:08.054400   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:18:08.054504   67764 retry.go:31] will retry after 456.587087ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:08.513579   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:18:08.533014   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:18:08.533121   67764 retry.go:31] will retry after 471.810075ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:09.006390   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:18:09.026487   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	W0731 15:18:09.026590   67764 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	
	W0731 15:18:09.026611   67764 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:09.026625   67764 start.go:128] duration metric: took 6m2.971761223s to createHost
	I0731 15:18:09.026698   67764 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 15:18:09.026749   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:18:09.044690   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:18:09.044784   67764 retry.go:31] will retry after 146.367155ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:09.193556   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:18:09.213166   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:18:09.213258   67764 retry.go:31] will retry after 319.644108ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:09.534702   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:18:09.554102   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:18:09.554199   67764 retry.go:31] will retry after 447.095928ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:10.003695   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:18:10.023730   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	W0731 15:18:10.023838   67764 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	
	W0731 15:18:10.023854   67764 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:10.023911   67764 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 15:18:10.023971   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:18:10.042638   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:18:10.042731   67764 retry.go:31] will retry after 176.941381ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:10.221557   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:18:10.241333   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:18:10.241426   67764 retry.go:31] will retry after 497.771428ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:10.741691   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:18:10.761201   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:18:10.761297   67764 retry.go:31] will retry after 455.709928ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:11.217821   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:18:11.236858   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:18:11.236954   67764 retry.go:31] will retry after 778.874841ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:12.018256   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:18:12.038150   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	W0731 15:18:12.038253   67764 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	
	W0731 15:18:12.038272   67764 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:12.038280   67764 fix.go:56] duration metric: took 6m25.125066478s for fixHost
	I0731 15:18:12.038286   67764 start.go:83] releasing machines lock for "multinode-311000", held for 6m25.125096149s
	W0731 15:18:12.038303   67764 start.go:714] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W0731 15:18:12.038367   67764 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I0731 15:18:12.038373   67764 start.go:729] Will try again in 5 seconds ...
	I0731 15:18:17.040738   67764 start.go:360] acquireMachinesLock for multinode-311000: {Name:mk7981435695037af8cd786e9a728446a653cd34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:18:17.040982   67764 start.go:364] duration metric: took 204.192µs to acquireMachinesLock for "multinode-311000"
	I0731 15:18:17.041021   67764 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:18:17.041029   67764 fix.go:54] fixHost starting: 
	I0731 15:18:17.041510   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:18:17.060804   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:18:17.060848   67764 fix.go:112] recreateIfNeeded on multinode-311000: state= err=unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:17.060863   67764 fix.go:117] machineExists: false. err=machine does not exist
	I0731 15:18:17.082716   67764 out.go:177] * docker "multinode-311000" container is missing, will recreate.
	I0731 15:18:17.125224   67764 delete.go:124] DEMOLISHING multinode-311000 ...
	I0731 15:18:17.125472   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:18:17.144003   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	W0731 15:18:17.144051   67764 stop.go:83] unable to get state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:17.144068   67764 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:17.144443   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:18:17.161279   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:18:17.161334   67764 delete.go:82] Unable to get host status for multinode-311000, assuming it has already been deleted: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:17.161422   67764 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-311000
	W0731 15:18:17.178580   67764 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-311000 returned with exit code 1
	I0731 15:18:17.178608   67764 kic.go:371] could not find the container multinode-311000 to remove it. will try anyways
	I0731 15:18:17.178685   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:18:17.195837   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	W0731 15:18:17.195879   67764 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:17.195961   67764 cli_runner.go:164] Run: docker exec --privileged -t multinode-311000 /bin/bash -c "sudo init 0"
	W0731 15:18:17.213119   67764 cli_runner.go:211] docker exec --privileged -t multinode-311000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0731 15:18:17.213147   67764 oci.go:650] error shutdown multinode-311000: docker exec --privileged -t multinode-311000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:18.213418   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:18:18.232258   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:18:18.232303   67764 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:18.232317   67764 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:18:18.232344   67764 retry.go:31] will retry after 477.766616ms: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:18.711243   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:18:18.731456   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:18:18.731512   67764 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:18.731522   67764 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:18:18.731552   67764 retry.go:31] will retry after 482.717876ms: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:19.215549   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:18:19.235877   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:18:19.235922   67764 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:19.235944   67764 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:18:19.235971   67764 retry.go:31] will retry after 884.7906ms: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:20.123116   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:18:20.142784   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:18:20.142830   67764 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:20.142841   67764 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:18:20.142868   67764 retry.go:31] will retry after 2.305700132s: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:22.451037   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:18:22.471229   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:18:22.471269   67764 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:22.471282   67764 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:18:22.471306   67764 retry.go:31] will retry after 2.640971991s: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:25.114734   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:18:25.134180   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:18:25.134233   67764 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:25.134241   67764 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:18:25.134266   67764 retry.go:31] will retry after 2.768700457s: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:27.905440   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:18:27.925419   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:18:27.925464   67764 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:27.925475   67764 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:18:27.925500   67764 retry.go:31] will retry after 6.692251745s: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:34.618833   67764 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:18:34.638972   67764 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:18:34.639027   67764 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:18:34.639038   67764 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:18:34.639070   67764 oci.go:88] couldn't shut down multinode-311000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	 
	I0731 15:18:34.639142   67764 cli_runner.go:164] Run: docker rm -f -v multinode-311000
	I0731 15:18:34.657573   67764 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-311000
	W0731 15:18:34.674874   67764 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-311000 returned with exit code 1
	I0731 15:18:34.674985   67764 cli_runner.go:164] Run: docker network inspect multinode-311000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 15:18:34.692694   67764 cli_runner.go:164] Run: docker network rm multinode-311000
	I0731 15:18:34.779911   67764 fix.go:124] Sleeping 1 second for extra luck!
	I0731 15:18:35.780684   67764 start.go:125] createHost starting for "" (driver="docker")
	I0731 15:18:35.802179   67764 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0731 15:18:35.802276   67764 start.go:159] libmachine.API.Create for "multinode-311000" (driver="docker")
	I0731 15:18:35.802294   67764 client.go:168] LocalClient.Create starting
	I0731 15:18:35.802412   67764 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/ca.pem
	I0731 15:18:35.802466   67764 main.go:141] libmachine: Decoding PEM data...
	I0731 15:18:35.802478   67764 main.go:141] libmachine: Parsing certificate...
	I0731 15:18:35.802532   67764 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/cert.pem
	I0731 15:18:35.802570   67764 main.go:141] libmachine: Decoding PEM data...
	I0731 15:18:35.802578   67764 main.go:141] libmachine: Parsing certificate...
	I0731 15:18:35.823719   67764 cli_runner.go:164] Run: docker network inspect multinode-311000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 15:18:35.841300   67764 cli_runner.go:211] docker network inspect multinode-311000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 15:18:35.841394   67764 network_create.go:284] running [docker network inspect multinode-311000] to gather additional debugging logs...
	I0731 15:18:35.841412   67764 cli_runner.go:164] Run: docker network inspect multinode-311000
	W0731 15:18:35.858443   67764 cli_runner.go:211] docker network inspect multinode-311000 returned with exit code 1
	I0731 15:18:35.858478   67764 network_create.go:287] error running [docker network inspect multinode-311000]: docker network inspect multinode-311000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-311000 not found
	I0731 15:18:35.858490   67764 network_create.go:289] output of [docker network inspect multinode-311000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-311000 not found
	
	** /stderr **
	I0731 15:18:35.858638   67764 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 15:18:35.878200   67764 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:18:35.879772   67764 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:18:35.881298   67764 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:18:35.882719   67764 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:18:35.883055   67764 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00141c2d0}
	I0731 15:18:35.883067   67764 network_create.go:124] attempt to create docker network multinode-311000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0731 15:18:35.883137   67764 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-311000 multinode-311000
	I0731 15:18:35.946021   67764 network_create.go:108] docker network multinode-311000 192.168.85.0/24 created
	I0731 15:18:35.946052   67764 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-311000" container
	I0731 15:18:35.946171   67764 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 15:18:35.964505   67764 cli_runner.go:164] Run: docker volume create multinode-311000 --label name.minikube.sigs.k8s.io=multinode-311000 --label created_by.minikube.sigs.k8s.io=true
	I0731 15:18:35.981662   67764 oci.go:103] Successfully created a docker volume multinode-311000
	I0731 15:18:35.981773   67764 cli_runner.go:164] Run: docker run --rm --name multinode-311000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-311000 --entrypoint /usr/bin/test -v multinode-311000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0731 15:18:36.240628   67764 oci.go:107] Successfully prepared a docker volume multinode-311000
	I0731 15:18:36.240658   67764 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:18:36.240671   67764 kic.go:194] Starting extracting preloaded images to volume ...
	I0731 15:18:36.240790   67764 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-311000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0731 15:24:35.814227   67764 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 15:24:35.814356   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:24:35.834005   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:24:35.834121   67764 retry.go:31] will retry after 325.547352ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:36.160119   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:24:36.195998   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:24:36.196116   67764 retry.go:31] will retry after 365.578496ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:36.564115   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:24:36.584200   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:24:36.584295   67764 retry.go:31] will retry after 730.747842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:37.316355   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:24:37.336251   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	W0731 15:24:37.336363   67764 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	
	W0731 15:24:37.336383   67764 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:37.336452   67764 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 15:24:37.336514   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:24:37.354199   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:24:37.354307   67764 retry.go:31] will retry after 281.803899ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:37.637015   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:24:37.657566   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:24:37.657667   67764 retry.go:31] will retry after 424.564347ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:38.084609   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:24:38.103748   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:24:38.103854   67764 retry.go:31] will retry after 708.30591ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:38.814572   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:24:38.834438   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	W0731 15:24:38.834543   67764 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	
	W0731 15:24:38.834557   67764 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:38.834571   67764 start.go:128] duration metric: took 6m3.04417942s to createHost
	I0731 15:24:38.834639   67764 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 15:24:38.834700   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:24:38.852522   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:24:38.852612   67764 retry.go:31] will retry after 263.69181ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:39.118628   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:24:39.138935   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:24:39.139027   67764 retry.go:31] will retry after 244.274692ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:39.385758   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:24:39.404952   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:24:39.405048   67764 retry.go:31] will retry after 793.888641ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:40.201390   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:24:40.221472   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	W0731 15:24:40.221578   67764 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	
	W0731 15:24:40.221597   67764 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:40.221655   67764 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0731 15:24:40.221715   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:24:40.239170   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:24:40.239272   67764 retry.go:31] will retry after 136.705993ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:40.378397   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:24:40.398326   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:24:40.398422   67764 retry.go:31] will retry after 274.610171ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:40.674447   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:24:40.694846   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:24:40.694958   67764 retry.go:31] will retry after 675.884092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:41.371131   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:24:41.390048   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	I0731 15:24:41.390143   67764 retry.go:31] will retry after 780.46955ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:42.172158   67764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000
	W0731 15:24:42.192150   67764 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000 returned with exit code 1
	W0731 15:24:42.192259   67764 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	
	W0731 15:24:42.192277   67764 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-311000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-311000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:42.192288   67764 fix.go:56] duration metric: took 6m25.140976657s for fixHost
	I0731 15:24:42.192295   67764 start.go:83] releasing machines lock for "multinode-311000", held for 6m25.141015579s
	W0731 15:24:42.192374   67764 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-311000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-311000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0731 15:24:42.235747   67764 out.go:177] 
	W0731 15:24:42.257020   67764 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0731 15:24:42.257092   67764 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0731 15:24:42.257187   67764 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0731 15:24:42.299674   67764 out.go:177] 

** /stderr **
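The stderr above is dominated by minikube's generic retry helper: each failed `docker container inspect` for the 22/tcp host port is re-run after a short randomized delay (the `retry.go:31] will retry after 247.601276ms` style lines). As an illustrative sketch only, not minikube's actual retry.go, here is a minimal Go version of that loop; the `inspectPort22` helper name and the delay bounds are invented for the example, while the docker command and format string are taken verbatim from the log:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"strings"
	"time"
)

// inspectPort22 is a hypothetical stand-in for the call the log shows
// failing repeatedly: asking Docker for the host port mapped to 22/tcp.
func inspectPort22(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, name).Output()
	if err != nil {
		return "", fmt.Errorf("get port 22 for %q: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	var lastErr error
	for attempt := 0; attempt < 5; attempt++ {
		port, err := inspectPort22("multinode-311000")
		if err == nil {
			fmt.Println("ssh host port:", port)
			return
		}
		lastErr = err
		// Randomized delay, echoing the "will retry after ..." lines in
		// the log above; the bounds here are made up for the sketch.
		delay := time.Duration(100+rand.Intn(700)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	fmt.Println("giving up:", lastErr)
}

Because the container was never created, every attempt fails the same way ("No such container"), so the backoff only stretches out the inevitable DRV_CREATE_TIMEOUT seen at the end of the run.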
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-311000" : exit status 52
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-311000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-311000
helpers_test.go:235: (dbg) docker inspect multinode-311000:

-- stdout --
	[
	    {
	        "Name": "multinode-311000",
	        "Id": "202312dd0ba0cc97757ecc9e1b61df6d4ccfef96938c1abf743580307644bc3e",
	        "Created": "2024-07-31T22:18:35.898296895Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-311000"
	        }
	    }
	]

-- /stdout --
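The inspect output above shows the only artifact that survived: the `multinode-311000` bridge network, created at 15:18:35 with the exact flags logged earlier, while the container itself never appeared. A hedged sketch of reproducing that network creation from Go, using only the CLI flags visible in the log (the helper name and error wrapping are mine, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// createMinikubeNetwork mirrors the `docker network create` invocation
// recorded in the log; the function itself is illustrative only.
func createMinikubeNetwork(name, subnet, gateway string) error {
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=65535",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io="+name,
		name)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("network create: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := createMinikubeNetwork("multinode-311000", "192.168.85.0/24", "192.168.85.1")
	if err != nil {
		fmt.Println(err)
	}
}

The subnet and gateway arguments match the free private range that network.go selected above after skipping the reserved 192.168.49/58/67/76.0/24 ranges.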
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000: exit status 7 (74.391467ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0731 15:24:42.531644   68028 status.go:249] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-311000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (790.27s)
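Every failure in this test funnels through the same Go template that minikube passes to `docker container inspect -f`: against a running container it extracts the host port bound to 22/tcp, but when the container does not exist, docker exits 1 with "No such container" before the template is ever evaluated. A self-contained sketch of how that template resolves; the struct shape below is a pared-down stand-in for Docker's inspect JSON, not its real types:

package main

import (
	"os"
	"text/template"
)

// portBinding and container model only the fields of Docker's inspect
// output that the template actually touches (hypothetical types).
type portBinding struct {
	HostIP   string
	HostPort string
}

type container struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func main() {
	var c container
	c.NetworkSettings.Ports = map[string][]portBinding{
		"22/tcp": {{HostIP: "127.0.0.1", HostPort: "58022"}}, // example value
	}
	// The exact format string from the log: index into the port map,
	// take the first binding, and read its HostPort field.
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	if err := tmpl.Execute(os.Stdout, c); err != nil { // prints 58022
		panic(err)
	}
}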

TestMultiNode/serial/DeleteNode (0.34s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-311000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-311000 node delete m03: exit status 80 (162.183691ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-311000 host status: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	

** /stderr **
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-amd64 -p multinode-311000 node delete m03": exit status 80
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-311000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-311000 status --alsologtostderr: exit status 7 (76.335042ms)

-- stdout --
	multinode-311000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0731 15:24:42.751482   68034 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:24:42.751750   68034 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:24:42.751755   68034 out.go:304] Setting ErrFile to fd 2...
	I0731 15:24:42.751759   68034 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:24:42.751942   68034 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 15:24:42.752128   68034 out.go:298] Setting JSON to false
	I0731 15:24:42.752151   68034 mustload.go:65] Loading cluster: multinode-311000
	I0731 15:24:42.752197   68034 notify.go:220] Checking for updates...
	I0731 15:24:42.752468   68034 config.go:182] Loaded profile config "multinode-311000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:24:42.752485   68034 status.go:255] checking status of multinode-311000 ...
	I0731 15:24:42.752855   68034 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:24:42.770360   68034 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:24:42.770424   68034 status.go:330] multinode-311000 host status = "" (err=state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	)
	I0731 15:24:42.770447   68034 status.go:257] multinode-311000 status: &{Name:multinode-311000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 15:24:42.770467   68034 status.go:260] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	E0731 15:24:42.770474   68034 status.go:263] The "multinode-311000" host does not exist!

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-311000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-311000
helpers_test.go:235: (dbg) docker inspect multinode-311000:

-- stdout --
	[
	    {
	        "Name": "multinode-311000",
	        "Id": "202312dd0ba0cc97757ecc9e1b61df6d4ccfef96938c1abf743580307644bc3e",
	        "Created": "2024-07-31T22:18:35.898296895Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-311000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000: exit status 7 (75.14785ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0731 15:24:42.866933   68038 status.go:249] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-311000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.34s)

TestMultiNode/serial/StopMultiNode (14.14s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-311000 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-311000 stop: exit status 82 (13.887062092s)

-- stdout --
	* Stopping node "multinode-311000"  ...
	* Stopping node "multinode-311000"  ...
	* Stopping node "multinode-311000"  ...
	* Stopping node "multinode-311000"  ...
	* Stopping node "multinode-311000"  ...
	* Stopping node "multinode-311000"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-311000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-darwin-amd64 -p multinode-311000 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-311000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-311000 status: exit status 7 (75.506952ms)

-- stdout --
	multinode-311000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0731 15:24:56.830245   68052 status.go:260] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	E0731 15:24:56.830255   68052 status.go:263] The "multinode-311000" host does not exist!

** /stderr **
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-311000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-311000 status --alsologtostderr: exit status 7 (75.459384ms)

-- stdout --
	multinode-311000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0731 15:24:56.886912   68055 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:24:56.887191   68055 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:24:56.887196   68055 out.go:304] Setting ErrFile to fd 2...
	I0731 15:24:56.887200   68055 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:24:56.887386   68055 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 15:24:56.887563   68055 out.go:298] Setting JSON to false
	I0731 15:24:56.887586   68055 mustload.go:65] Loading cluster: multinode-311000
	I0731 15:24:56.887617   68055 notify.go:220] Checking for updates...
	I0731 15:24:56.887846   68055 config.go:182] Loaded profile config "multinode-311000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:24:56.887864   68055 status.go:255] checking status of multinode-311000 ...
	I0731 15:24:56.888271   68055 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:24:56.905694   68055 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:24:56.905749   68055 status.go:330] multinode-311000 host status = "" (err=state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	)
	I0731 15:24:56.905769   68055 status.go:257] multinode-311000 status: &{Name:multinode-311000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 15:24:56.905790   68055 status.go:260] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	E0731 15:24:56.905799   68055 status.go:263] The "multinode-311000" host does not exist!

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-311000 status --alsologtostderr": multinode-311000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-311000 status --alsologtostderr": multinode-311000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-311000
helpers_test.go:235: (dbg) docker inspect multinode-311000:

-- stdout --
	[
	    {
	        "Name": "multinode-311000",
	        "Id": "202312dd0ba0cc97757ecc9e1b61df6d4ccfef96938c1abf743580307644bc3e",
	        "Created": "2024-07-31T22:18:35.898296895Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-311000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000: exit status 7 (75.316303ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0731 15:24:57.002355   68059 status.go:249] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-311000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (14.14s)

TestMultiNode/serial/RestartMultiNode (84.38s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-311000 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-311000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (1m24.270105369s)

-- stdout --
	* [multinode-311000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-311000" primary control-plane node in "multinode-311000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* docker "multinode-311000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

-- /stdout --
** stderr ** 
	I0731 15:24:57.057673   68062 out.go:291] Setting OutFile to fd 1 ...
	I0731 15:24:57.057931   68062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:24:57.057936   68062 out.go:304] Setting ErrFile to fd 2...
	I0731 15:24:57.057940   68062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 15:24:57.058116   68062 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 15:24:57.059550   68062 out.go:298] Setting JSON to false
	I0731 15:24:57.082044   68062 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":21264,"bootTime":1722443433,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0731 15:24:57.082147   68062 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 15:24:57.103963   68062 out.go:177] * [multinode-311000] minikube v1.33.1 on Darwin 14.5
	I0731 15:24:57.145770   68062 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 15:24:57.145851   68062 notify.go:220] Checking for updates...
	I0731 15:24:57.188577   68062 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	I0731 15:24:57.209717   68062 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0731 15:24:57.230446   68062 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 15:24:57.251791   68062 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	I0731 15:24:57.272845   68062 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 15:24:57.294270   68062 config.go:182] Loaded profile config "multinode-311000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 15:24:57.295015   68062 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 15:24:57.318874   68062 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0731 15:24:57.319033   68062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 15:24:57.399948   68062 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:122 SystemTime:2024-07-31 22:24:57.391163262 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0731 15:24:57.442504   68062 out.go:177] * Using the docker driver based on existing profile
	I0731 15:24:57.463373   68062 start.go:297] selected driver: docker
	I0731 15:24:57.463401   68062 start.go:901] validating driver "docker" against &{Name:multinode-311000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:24:57.463519   68062 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 15:24:57.463715   68062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 15:24:57.544513   68062 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:122 SystemTime:2024-07-31 22:24:57.535953236 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0731 15:24:57.547732   68062 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 15:24:57.547770   68062 cni.go:84] Creating CNI manager for ""
	I0731 15:24:57.547778   68062 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 15:24:57.547852   68062 start.go:340] cluster config:
	{Name:multinode-311000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 15:24:57.590469   68062 out.go:177] * Starting "multinode-311000" primary control-plane node in "multinode-311000" cluster
	I0731 15:24:57.611448   68062 cache.go:121] Beginning downloading kic base image for docker with docker
	I0731 15:24:57.632469   68062 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0731 15:24:57.674461   68062 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:24:57.674539   68062 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0731 15:24:57.674534   68062 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0731 15:24:57.674559   68062 cache.go:56] Caching tarball of preloaded images
	I0731 15:24:57.674790   68062 preload.go:172] Found /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0731 15:24:57.674810   68062 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 15:24:57.675006   68062 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/multinode-311000/config.json ...
	W0731 15:24:57.700672   68062 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0731 15:24:57.700689   68062 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0731 15:24:57.700808   68062 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0731 15:24:57.700830   68062 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0731 15:24:57.700837   68062 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0731 15:24:57.700845   68062 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0731 15:24:57.700850   68062 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0731 15:24:57.703956   68062 image.go:273] response: 
	I0731 15:24:57.846462   68062 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0731 15:24:57.846513   68062 cache.go:194] Successfully downloaded all kic artifacts
	I0731 15:24:57.846556   68062 start.go:360] acquireMachinesLock for multinode-311000: {Name:mk7981435695037af8cd786e9a728446a653cd34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 15:24:57.846666   68062 start.go:364] duration metric: took 90.109µs to acquireMachinesLock for "multinode-311000"
	I0731 15:24:57.846690   68062 start.go:96] Skipping create...Using existing machine configuration
	I0731 15:24:57.846699   68062 fix.go:54] fixHost starting: 
	I0731 15:24:57.846935   68062 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:24:57.864992   68062 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:24:57.865049   68062 fix.go:112] recreateIfNeeded on multinode-311000: state= err=unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:57.865074   68062 fix.go:117] machineExists: false. err=machine does not exist
	I0731 15:24:57.886445   68062 out.go:177] * docker "multinode-311000" container is missing, will recreate.
	I0731 15:24:57.906959   68062 delete.go:124] DEMOLISHING multinode-311000 ...
	I0731 15:24:57.907075   68062 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:24:57.964288   68062 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	W0731 15:24:57.964336   68062 stop.go:83] unable to get state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:57.964350   68062 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:57.964737   68062 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:24:57.988016   68062 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:24:57.988073   68062 delete.go:82] Unable to get host status for multinode-311000, assuming it has already been deleted: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:57.988165   68062 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-311000
	W0731 15:24:58.005338   68062 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-311000 returned with exit code 1
	I0731 15:24:58.005373   68062 kic.go:371] could not find the container multinode-311000 to remove it. will try anyways
	I0731 15:24:58.005449   68062 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:24:58.022785   68062 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	W0731 15:24:58.022836   68062 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:58.022918   68062 cli_runner.go:164] Run: docker exec --privileged -t multinode-311000 /bin/bash -c "sudo init 0"
	W0731 15:24:58.039677   68062 cli_runner.go:211] docker exec --privileged -t multinode-311000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0731 15:24:58.039705   68062 oci.go:650] error shutdown multinode-311000: docker exec --privileged -t multinode-311000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:59.040184   68062 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:24:59.057350   68062 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:24:59.057396   68062 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:59.057408   68062 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:24:59.057450   68062 retry.go:31] will retry after 375.668406ms: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:59.433302   68062 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:24:59.450868   68062 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:24:59.450917   68062 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:24:59.450929   68062 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:24:59.450953   68062 retry.go:31] will retry after 1.001531137s: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:25:00.452976   68062 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:25:00.470198   68062 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:25:00.470248   68062 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:25:00.470260   68062 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:25:00.470283   68062 retry.go:31] will retry after 1.446046221s: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:25:01.916767   68062 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:25:01.933605   68062 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:25:01.933652   68062 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:25:01.933661   68062 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:25:01.933684   68062 retry.go:31] will retry after 1.598363742s: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:25:03.532403   68062 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:25:03.549996   68062 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:25:03.550040   68062 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:25:03.550048   68062 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:25:03.550072   68062 retry.go:31] will retry after 2.313903671s: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:25:05.864268   68062 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:25:05.883022   68062 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:25:05.883338   68062 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:25:05.883352   68062 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:25:05.883377   68062 retry.go:31] will retry after 5.588461934s: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:25:11.474505   68062 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:25:11.495056   68062 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:25:11.495101   68062 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:25:11.495109   68062 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:25:11.495135   68062 retry.go:31] will retry after 4.584835621s: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:25:16.082500   68062 cli_runner.go:164] Run: docker container inspect multinode-311000 --format={{.State.Status}}
	W0731 15:25:16.102660   68062 cli_runner.go:211] docker container inspect multinode-311000 --format={{.State.Status}} returned with exit code 1
	I0731 15:25:16.102720   68062 oci.go:662] temporary error verifying shutdown: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	I0731 15:25:16.102730   68062 oci.go:664] temporary error: container multinode-311000 status is  but expect it to be exited
	I0731 15:25:16.102757   68062 oci.go:88] couldn't shut down multinode-311000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000
	 
	I0731 15:25:16.102845   68062 cli_runner.go:164] Run: docker rm -f -v multinode-311000
	I0731 15:25:16.121488   68062 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-311000
	W0731 15:25:16.139015   68062 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-311000 returned with exit code 1
	I0731 15:25:16.139121   68062 cli_runner.go:164] Run: docker network inspect multinode-311000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 15:25:16.157136   68062 cli_runner.go:164] Run: docker network rm multinode-311000
	I0731 15:25:16.233645   68062 fix.go:124] Sleeping 1 second for extra luck!
	I0731 15:25:17.233879   68062 start.go:125] createHost starting for "" (driver="docker")
	I0731 15:25:17.256012   68062 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0731 15:25:17.256203   68062 start.go:159] libmachine.API.Create for "multinode-311000" (driver="docker")
	I0731 15:25:17.256250   68062 client.go:168] LocalClient.Create starting
	I0731 15:25:17.256453   68062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/ca.pem
	I0731 15:25:17.256551   68062 main.go:141] libmachine: Decoding PEM data...
	I0731 15:25:17.256588   68062 main.go:141] libmachine: Parsing certificate...
	I0731 15:25:17.256685   68062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19360-61501/.minikube/certs/cert.pem
	I0731 15:25:17.256769   68062 main.go:141] libmachine: Decoding PEM data...
	I0731 15:25:17.256784   68062 main.go:141] libmachine: Parsing certificate...
	I0731 15:25:17.278254   68062 cli_runner.go:164] Run: docker network inspect multinode-311000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0731 15:25:17.297568   68062 cli_runner.go:211] docker network inspect multinode-311000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0731 15:25:17.297673   68062 network_create.go:284] running [docker network inspect multinode-311000] to gather additional debugging logs...
	I0731 15:25:17.297692   68062 cli_runner.go:164] Run: docker network inspect multinode-311000
	W0731 15:25:17.315058   68062 cli_runner.go:211] docker network inspect multinode-311000 returned with exit code 1
	I0731 15:25:17.315085   68062 network_create.go:287] error running [docker network inspect multinode-311000]: docker network inspect multinode-311000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-311000 not found
	I0731 15:25:17.315095   68062 network_create.go:289] output of [docker network inspect multinode-311000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-311000 not found
	
	** /stderr **
	I0731 15:25:17.315241   68062 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0731 15:25:17.335296   68062 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:25:17.336903   68062 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:25:17.337255   68062 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014e7960}
	I0731 15:25:17.337272   68062 network_create.go:124] attempt to create docker network multinode-311000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0731 15:25:17.337349   68062 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-311000 multinode-311000
	W0731 15:25:17.355868   68062 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-311000 multinode-311000 returned with exit code 1
	W0731 15:25:17.355901   68062 network_create.go:149] failed to create docker network multinode-311000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-311000 multinode-311000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0731 15:25:17.355923   68062 network_create.go:116] failed to create docker network multinode-311000 192.168.67.0/24, will retry: subnet is taken
	I0731 15:25:17.357497   68062 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 15:25:17.357877   68062 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00188ebc0}
	I0731 15:25:17.357893   68062 network_create.go:124] attempt to create docker network multinode-311000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0731 15:25:17.357961   68062 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-311000 multinode-311000
	I0731 15:25:17.422081   68062 network_create.go:108] docker network multinode-311000 192.168.76.0/24 created
	I0731 15:25:17.422129   68062 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-311000" container
	I0731 15:25:17.422246   68062 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0731 15:25:17.440572   68062 cli_runner.go:164] Run: docker volume create multinode-311000 --label name.minikube.sigs.k8s.io=multinode-311000 --label created_by.minikube.sigs.k8s.io=true
	I0731 15:25:17.457665   68062 oci.go:103] Successfully created a docker volume multinode-311000
	I0731 15:25:17.457786   68062 cli_runner.go:164] Run: docker run --rm --name multinode-311000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-311000 --entrypoint /usr/bin/test -v multinode-311000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0731 15:25:17.721597   68062 oci.go:107] Successfully prepared a docker volume multinode-311000
	I0731 15:25:17.721645   68062 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 15:25:17.721660   68062 kic.go:194] Starting extracting preloaded images to volume ...
	I0731 15:25:17.721784   68062 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-311000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-311000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-311000
helpers_test.go:235: (dbg) docker inspect multinode-311000:

-- stdout --
	[
	    {
	        "Name": "multinode-311000",
	        "Id": "5587f087607cb9a07b356557a45e5fc16b186121529089ef76890a527368f0cf",
	        "Created": "2024-07-31T22:25:17.373425225Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-311000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-311000 -n multinode-311000: exit status 7 (80.605938ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0731 15:26:21.380907   68139 status.go:249] status error: host: state: unknown state "multinode-311000": docker container inspect multinode-311000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-311000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-311000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (84.38s)
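Before the harness killed it, the start log above also captures minikube's subnet-collision handling: `docker network create` for 192.168.67.0/24 fails with "Pool overlaps with other one on this address space", the subnet is marked reserved, and the next free private /24 (192.168.76.0/24) is used instead. Below is a minimal Go sketch of such a retry loop; the function name, the +9 step (inferred only from the 67 -> 76 jump in this log), and the taken-set are illustrative assumptions, not minikube's actual network.go implementation.

```go
// Hedged sketch: step through candidate private /24 subnets until one does
// not collide with an already-allocated pool, mirroring the retry the log
// above shows (192.168.67.0/24 taken -> 192.168.76.0/24 used).
package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks 192.168.<third>.0/24 upward in steps of `step`
// and returns the first candidate that is not marked as taken.
func firstFreeSubnet(start, step int, taken map[string]bool) (*net.IPNet, error) {
	for third := start; third < 255; third += step {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[cidr] {
			continue // reserved: another network overlaps this pool
		}
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return subnet, nil
	}
	return nil, fmt.Errorf("no free /24 found")
}

func main() {
	taken := map[string]bool{"192.168.67.0/24": true} // as in the log above
	subnet, err := firstFreeSubnet(67, 9, taken)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet:", subnet) // 192.168.76.0/24
}
```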

TestScheduledStopUnix (300.54s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-410000 --memory=2048 --driver=docker 
E0731 15:28:41.061665   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 15:29:33.217432   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 15:30:04.223924   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-410000 --memory=2048 --driver=docker : signal: killed (5m0.004508019s)

-- stdout --
	* [scheduled-stop-410000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-410000" primary control-plane node in "scheduled-stop-410000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

-- stdout --
	* [scheduled-stop-410000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-410000" primary control-plane node in "scheduled-stop-410000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

-- /stdout --
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-31 15:33:36.053032 -0700 PDT m=+4756.795038498
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-410000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-410000:

-- stdout --
	[
	    {
	        "Name": "scheduled-stop-410000",
	        "Id": "19bf46df45700eed1f362267cd02d02b431faafcefa333489c8672791e2f8de0",
	        "Created": "2024-07-31T22:28:37.019016818Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-410000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-410000 -n scheduled-stop-410000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-410000 -n scheduled-stop-410000: exit status 7 (76.708503ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0731 15:33:36.151480   68833 status.go:249] status error: host: state: unknown state "scheduled-stop-410000": docker container inspect scheduled-stop-410000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-410000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-410000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-410000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-410000
--- FAIL: TestScheduledStopUnix (300.54s)
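The "signal: killed (5m0.004508019s)" result is characteristic of a Go test harness whose context deadline expires while the child process is still running: exec.CommandContext kills the child, and the exit is reported as "signal: killed". A minimal sketch follows, assuming a CommandContext-style runner; the 2-second budget and the `sleep` command stand in for the harness's 5-minute limit and the real `minikube start` invocation.

```go
// Hedged sketch of how a "signal: killed" failure like the one above arises:
// exec.CommandContext kills the child when the context deadline expires.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, "sleep", "60") // outlives the deadline
	err := cmd.Run()
	fmt.Println(err) // prints: signal: killed
}
```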

TestSkaffold (300.55s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2427056038 version
skaffold_test.go:59: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2427056038 version: (1.72165507s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-438000 --memory=2600 --driver=docker 
E0731 15:33:41.074451   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 15:34:33.222970   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 15:35:56.278352   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-438000 --memory=2600 --driver=docker : signal: killed (4m57.171595814s)

-- stdout --
	* [skaffold-438000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-438000" primary control-plane node in "skaffold-438000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

-- stdout --
	* [skaffold-438000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-438000" primary control-plane node in "skaffold-438000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

-- /stdout --
panic.go:626: *** TestSkaffold FAILED at 2024-07-31 15:38:36.602128 -0700 PDT m=+5057.340027883
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-438000
helpers_test.go:235: (dbg) docker inspect skaffold-438000:

-- stdout --
	[
	    {
	        "Name": "skaffold-438000",
	        "Id": "0c8f7b80efb2c5e85166e9b845084205b67d9181a5d59bcf2b9a5f26640eb9e9",
	        "Created": "2024-07-31T22:33:40.407977771Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-438000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-438000 -n skaffold-438000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-438000 -n skaffold-438000: exit status 7 (74.650657ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0731 15:38:36.696017   68953 status.go:249] status error: host: state: unknown state "skaffold-438000": docker container inspect skaffold-438000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-438000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-438000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-438000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-438000
--- FAIL: TestSkaffold (300.55s)
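All three of these timeouts end with the same post-mortem probe: `docker container inspect <name> --format={{.State.Status}}` exits non-zero with "No such container" once the container is gone, and the report surfaces that as the "Nonexistent" host state. A hedged sketch of that probe follows; the function name and the error-to-"Nonexistent" mapping are assumptions drawn only from the stderr above, not minikube's status code.

```go
// Hedged sketch of the host-state probe visible in the status errors above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostState(name string) string {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		return "Nonexistent" // inspect failed: container no longer exists
	}
	return strings.TrimSpace(string(out)) // e.g. "running", "exited"
}

func main() {
	fmt.Println(hostState("skaffold-438000"))
}
```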

TestInsufficientStorage (300.45s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-780000 --memory=2048 --output=json --wait=true --driver=docker 
E0731 15:38:41.078563   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 15:39:33.225840   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-780000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.005136867s)

-- stdout --
	{"specversion":"1.0","id":"b5fbe9cd-d911-4343-b923-a946b291b6a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-780000] minikube v1.33.1 on Darwin 14.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"db87da5f-816b-4729-b8de-556401bb9b85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19360"}}
	{"specversion":"1.0","id":"0baa80bd-f527-498a-b452-562a6837d900","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig"}}
	{"specversion":"1.0","id":"e9154b82-3435-4b91-805e-07cba317a99e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"7d52636a-d39d-4ef5-8227-96b552ceb13b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0619f649-abe2-4a3c-b10c-84ab5ea4d310","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube"}}
	{"specversion":"1.0","id":"bbb1bec0-e756-4b02-84e3-ff696cde3915","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e0772b06-5b15-4823-a78a-5115d2b7bbb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"287f5592-7301-4a22-a6d2-24dd41afe69e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d17bf900-78e5-47a2-a81e-4c0722b3f0c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e0aba15-04a5-4444-bc82-cfd69f599ecd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"10f24cfc-bb11-400b-95eb-d1ac6ff1a537","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-780000\" primary control-plane node in \"insufficient-storage-780000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3f411d0e-519e-404d-a2be-674c2f9fc36e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1721902582-19326 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e064ad3f-1506-44a2-8b66-9df9fbed7f2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-780000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-780000 --output=json --layout=cluster: context deadline exceeded (973ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-780000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-780000
--- FAIL: TestInsufficientStorage (300.45s)
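The stdout above is minikube's --output=json event stream: one CloudEvents object per line with specversion, id, source, type, and a data payload (field names taken from the log itself). The test's "unmarshalling: unexpected end of JSON input" comes from feeding an empty status response to the JSON decoder. Below is a minimal sketch of consuming such a stream while skipping undecodable lines; the struct and loop are illustrative, not minikube's own types.

```go
// Hedged sketch for decoding the line-delimited JSON events that
// "minikube start --output=json" emits, per the stdout captured above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// e.g. minikube start ... --output=json | thisprog
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip empty or non-JSON lines instead of failing
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}
```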

Test pass (169/210)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 13.66
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.34
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.30.3/json-events 9.72
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.3
18 TestDownloadOnly/v1.30.3/DeleteAll 0.35
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.21
21 TestDownloadOnly/v1.31.0-beta.0/json-events 11.12
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.3
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.34
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.21
29 TestDownloadOnlyKic 1.51
30 TestBinaryMirror 1.31
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
36 TestAddons/Setup 221.07
38 TestAddons/serial/Volcano 40.34
40 TestAddons/serial/GCPAuth/Namespaces 0.1
44 TestAddons/parallel/InspektorGadget 10.81
45 TestAddons/parallel/MetricsServer 5.65
46 TestAddons/parallel/HelmTiller 13.6
48 TestAddons/parallel/CSI 63.38
49 TestAddons/parallel/Headlamp 17.79
50 TestAddons/parallel/CloudSpanner 5.53
51 TestAddons/parallel/LocalPath 52.17
52 TestAddons/parallel/NvidiaDevicePlugin 5.49
53 TestAddons/parallel/Yakd 10.61
54 TestAddons/StoppedEnableDisable 11.37
62 TestHyperKitDriverInstallOrUpdate 7.54
65 TestErrorSpam/setup 21.58
66 TestErrorSpam/start 2.22
67 TestErrorSpam/status 0.81
68 TestErrorSpam/pause 1.4
69 TestErrorSpam/unpause 1.44
70 TestErrorSpam/stop 11.25
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 36.54
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 33.56
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.27
82 TestFunctional/serial/CacheCmd/cache/add_local 1.39
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
84 TestFunctional/serial/CacheCmd/cache/list 0.08
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.45
87 TestFunctional/serial/CacheCmd/cache/delete 0.17
88 TestFunctional/serial/MinikubeKubectlCmd 1.17
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.49
90 TestFunctional/serial/ExtraConfig 39.75
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 2.87
93 TestFunctional/serial/LogsFileCmd 2.92
94 TestFunctional/serial/InvalidService 5.64
96 TestFunctional/parallel/ConfigCmd 0.52
97 TestFunctional/parallel/DashboardCmd 11.71
98 TestFunctional/parallel/DryRun 1.18
99 TestFunctional/parallel/InternationalLanguage 0.66
100 TestFunctional/parallel/StatusCmd 0.84
105 TestFunctional/parallel/AddonsCmd 0.25
106 TestFunctional/parallel/PersistentVolumeClaim 27.84
108 TestFunctional/parallel/SSHCmd 0.51
109 TestFunctional/parallel/CpCmd 1.7
110 TestFunctional/parallel/MySQL 29.32
111 TestFunctional/parallel/FileSync 0.26
112 TestFunctional/parallel/CertSync 1.53
116 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.29
120 TestFunctional/parallel/License 0.51
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.46
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.16
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
132 TestFunctional/parallel/ServiceCmd/DeployApp 7.12
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
134 TestFunctional/parallel/ProfileCmd/profile_list 0.37
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
136 TestFunctional/parallel/ServiceCmd/List 0.66
137 TestFunctional/parallel/MountCmd/any-port 6.71
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.67
139 TestFunctional/parallel/ServiceCmd/HTTPS 15
140 TestFunctional/parallel/MountCmd/specific-port 1.69
141 TestFunctional/parallel/MountCmd/VerifyCleanup 2.01
142 TestFunctional/parallel/ServiceCmd/Format 15
143 TestFunctional/parallel/Version/short 0.11
144 TestFunctional/parallel/Version/components 0.7
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
149 TestFunctional/parallel/ImageCommands/ImageBuild 3.1
150 TestFunctional/parallel/ImageCommands/Setup 1.69
151 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.07
152 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.92
153 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.61
154 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
155 TestFunctional/parallel/ServiceCmd/URL 15
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.62
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.42
159 TestFunctional/parallel/DockerEnv/bash 0.93
160 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
161 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.24
162 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
163 TestFunctional/delete_echo-server_images 0.04
164 TestFunctional/delete_my-image_image 0.02
165 TestFunctional/delete_minikube_cached_images 0.02
169 TestMultiControlPlane/serial/StartCluster 108.7
170 TestMultiControlPlane/serial/DeployApp 5.23
171 TestMultiControlPlane/serial/PingHostFromPods 1.38
172 TestMultiControlPlane/serial/AddWorkerNode 18.95
173 TestMultiControlPlane/serial/NodeLabels 0.06
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.68
175 TestMultiControlPlane/serial/CopyFile 16.22
176 TestMultiControlPlane/serial/StopSecondaryNode 11.35
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.54
178 TestMultiControlPlane/serial/RestartSecondaryNode 23.61
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.85
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 209.76
181 TestMultiControlPlane/serial/DeleteSecondaryNode 10.33
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.49
183 TestMultiControlPlane/serial/StopCluster 32.56
184 TestMultiControlPlane/serial/RestartCluster 86.35
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.48
186 TestMultiControlPlane/serial/AddSecondaryNode 35.56
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.67
190 TestImageBuild/serial/Setup 20.82
191 TestImageBuild/serial/NormalBuild 1.76
192 TestImageBuild/serial/BuildWithBuildArg 0.83
193 TestImageBuild/serial/BuildWithDockerIgnore 0.64
194 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.68
198 TestJSONOutput/start/Command 75.24
199 TestJSONOutput/start/Audit 0
201 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/pause/Command 0.49
205 TestJSONOutput/pause/Audit 0
207 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/unpause/Command 0.48
211 TestJSONOutput/unpause/Audit 0
213 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
216 TestJSONOutput/stop/Command 5.71
217 TestJSONOutput/stop/Audit 0
219 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
220 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
221 TestErrorJSONOutput 0.58
223 TestKicCustomNetwork/create_custom_network 23.12
224 TestKicCustomNetwork/use_default_bridge_network 22.61
225 TestKicExistingNetwork 22.31
226 TestKicCustomSubnet 22.41
227 TestKicStaticIP 22.66
228 TestMainNoArgs 0.08
229 TestMinikubeProfile 48.16
232 TestMountStart/serial/StartWithMountFirst 7.04
252 TestPreload 134.1
273 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 10.23
274 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 11.43
TestDownloadOnly/v1.20.0/json-events (13.66s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-962000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-962000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker : (13.657145733s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (13.66s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-962000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-962000: exit status 85 (297.017117ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-962000 | jenkins | v1.33.1 | 31 Jul 24 14:14 PDT |          |
	|         | -p download-only-962000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 14:14:18
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 14:14:18.990888   62039 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:14:18.991179   62039 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:14:18.991184   62039 out.go:304] Setting ErrFile to fd 2...
	I0731 14:14:18.991187   62039 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:14:18.991369   62039 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	W0731 14:14:18.991462   62039 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19360-61501/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19360-61501/.minikube/config/config.json: no such file or directory
	I0731 14:14:18.993550   62039 out.go:298] Setting JSON to true
	I0731 14:14:19.017835   62039 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":17027,"bootTime":1722443432,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0731 14:14:19.017925   62039 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 14:14:19.039745   62039 out.go:97] [download-only-962000] minikube v1.33.1 on Darwin 14.5
	I0731 14:14:19.039975   62039 notify.go:220] Checking for updates...
	W0731 14:14:19.039957   62039 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 14:14:19.061311   62039 out.go:169] MINIKUBE_LOCATION=19360
	I0731 14:14:19.083565   62039 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	I0731 14:14:19.105362   62039 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0731 14:14:19.126343   62039 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 14:14:19.147346   62039 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	W0731 14:14:19.189303   62039 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 14:14:19.189804   62039 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 14:14:19.213614   62039 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0731 14:14:19.213766   62039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 14:14:19.297370   62039 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:69 SystemTime:2024-07-31 21:14:19.288498489 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0731 14:14:19.318504   62039 out.go:97] Using the docker driver based on user configuration
	I0731 14:14:19.318587   62039 start.go:297] selected driver: docker
	I0731 14:14:19.318598   62039 start.go:901] validating driver "docker" against <nil>
	I0731 14:14:19.318828   62039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 14:14:19.403426   62039 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:69 SystemTime:2024-07-31 21:14:19.39425033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:htt
ps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-
g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-des
ktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugi
ns/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0731 14:14:19.403623   62039 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 14:14:19.406797   62039 start_flags.go:393] Using suggested 8100MB memory alloc based on sys=32768MB, container=15991MB
	I0731 14:14:19.406944   62039 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 14:14:19.428785   62039 out.go:169] Using Docker Desktop driver with root privileges
	I0731 14:14:19.450550   62039 cni.go:84] Creating CNI manager for ""
	I0731 14:14:19.450591   62039 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 14:14:19.450725   62039 start.go:340] cluster config:
	{Name:download-only-962000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:8100 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-962000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 14:14:19.472486   62039 out.go:97] Starting "download-only-962000" primary control-plane node in "download-only-962000" cluster
	I0731 14:14:19.472549   62039 cache.go:121] Beginning downloading kic base image for docker with docker
	I0731 14:14:19.494573   62039 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0731 14:14:19.494636   62039 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 14:14:19.494704   62039 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0731 14:14:19.512453   62039 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0731 14:14:19.513373   62039 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0731 14:14:19.513531   62039 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0731 14:14:19.547443   62039 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0731 14:14:19.547458   62039 cache.go:56] Caching tarball of preloaded images
	I0731 14:14:19.547636   62039 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 14:14:19.568475   62039 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0731 14:14:19.568493   62039 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0731 14:14:19.664014   62039 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0731 14:14:27.205962   62039 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0731 14:14:27.206200   62039 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0731 14:14:27.767104   62039 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 14:14:27.767329   62039 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/download-only-962000/config.json ...
	I0731 14:14:27.767352   62039 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/download-only-962000/config.json: {Name:mk57a4c084093e68e45703eac9618f5003731716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 14:14:27.767648   62039 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 14:14:27.768342   62039 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-962000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-962000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)
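The "Last Start" log above shows the preload tarball being fetched with a `?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3` query and then verified before the cache entry is trusted. Below is a minimal sketch of that verification step, assuming it amounts to a plain md5-of-file comparison; minikube's actual download code may differ.

```go
// Hedged sketch of the checksum step visible in the download log above:
// hash the downloaded file and compare against the md5 from the URL query.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func verifyMD5(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// Hash taken from the download URL in the log; path is the cached tarball.
	err := verifyMD5("preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4",
		"9a82241e9b8b4ad2b5cca73108f2c7a3")
	fmt.Println(err)
}
```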

TestDownloadOnly/v1.20.0/DeleteAll (0.34s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.34s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-962000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.30.3/json-events (9.72s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-543000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-543000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=docker : (9.720415712s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (9.72s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-543000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-543000: exit status 85 (297.17489ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-962000 | jenkins | v1.33.1 | 31 Jul 24 14:14 PDT |                     |
	|         | -p download-only-962000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 31 Jul 24 14:14 PDT | 31 Jul 24 14:14 PDT |
	| delete  | -p download-only-962000        | download-only-962000 | jenkins | v1.33.1 | 31 Jul 24 14:14 PDT | 31 Jul 24 14:14 PDT |
	| start   | -o=json --download-only        | download-only-543000 | jenkins | v1.33.1 | 31 Jul 24 14:14 PDT |                     |
	|         | -p download-only-543000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 14:14:33
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 14:14:33.500591   62091 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:14:33.501105   62091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:14:33.501116   62091 out.go:304] Setting ErrFile to fd 2...
	I0731 14:14:33.501123   62091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:14:33.501460   62091 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 14:14:33.504111   62091 out.go:298] Setting JSON to true
	I0731 14:14:33.526680   62091 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":17041,"bootTime":1722443432,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0731 14:14:33.526756   62091 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 14:14:33.548106   62091 out.go:97] [download-only-543000] minikube v1.33.1 on Darwin 14.5
	I0731 14:14:33.548294   62091 notify.go:220] Checking for updates...
	I0731 14:14:33.570128   62091 out.go:169] MINIKUBE_LOCATION=19360
	I0731 14:14:33.611905   62091 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	I0731 14:14:33.633274   62091 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0731 14:14:33.655146   62091 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 14:14:33.675830   62091 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	W0731 14:14:33.717969   62091 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 14:14:33.718472   62091 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 14:14:33.742920   62091 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0731 14:14:33.743223   62091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 14:14:33.825635   62091 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:69 SystemTime:2024-07-31 21:14:33.817041265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0731 14:14:33.847424   62091 out.go:97] Using the docker driver based on user configuration
	I0731 14:14:33.847476   62091 start.go:297] selected driver: docker
	I0731 14:14:33.847491   62091 start.go:901] validating driver "docker" against <nil>
	I0731 14:14:33.847701   62091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 14:14:33.929278   62091 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:69 SystemTime:2024-07-31 21:14:33.920382438 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0731 14:14:33.929498   62091 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 14:14:33.932451   62091 start_flags.go:393] Using suggested 8100MB memory alloc based on sys=32768MB, container=15991MB
	I0731 14:14:33.932586   62091 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 14:14:33.954386   62091 out.go:169] Using Docker Desktop driver with root privileges
	I0731 14:14:33.976178   62091 cni.go:84] Creating CNI manager for ""
	I0731 14:14:33.976224   62091 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 14:14:33.976241   62091 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 14:14:33.976392   62091 start.go:340] cluster config:
	{Name:download-only-543000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:8100 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-543000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 14:14:33.997954   62091 out.go:97] Starting "download-only-543000" primary control-plane node in "download-only-543000" cluster
	I0731 14:14:33.997999   62091 cache.go:121] Beginning downloading kic base image for docker with docker
	I0731 14:14:34.018945   62091 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0731 14:14:34.019046   62091 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 14:14:34.019115   62091 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0731 14:14:34.037684   62091 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0731 14:14:34.037856   62091 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0731 14:14:34.037875   62091 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0731 14:14:34.037881   62091 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0731 14:14:34.037888   62091 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0731 14:14:34.083065   62091 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0731 14:14:34.083130   62091 cache.go:56] Caching tarball of preloaded images
	I0731 14:14:34.084260   62091 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 14:14:34.105889   62091 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0731 14:14:34.105916   62091 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0731 14:14:34.186212   62091 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4?checksum=md5:6304692df2fe6f7b0bdd7f93d160be8c -> /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-543000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-543000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.30s)

TestDownloadOnly/v1.30.3/DeleteAll (0.35s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.35s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-543000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.31.0-beta.0/json-events (11.12s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-653000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-653000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=docker : (11.117495572s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (11.12s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-653000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-653000: exit status 85 (295.335264ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-962000 | jenkins | v1.33.1 | 31 Jul 24 14:14 PDT |                     |
	|         | -p download-only-962000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 14:14 PDT | 31 Jul 24 14:14 PDT |
	| delete  | -p download-only-962000             | download-only-962000 | jenkins | v1.33.1 | 31 Jul 24 14:14 PDT | 31 Jul 24 14:14 PDT |
	| start   | -o=json --download-only             | download-only-543000 | jenkins | v1.33.1 | 31 Jul 24 14:14 PDT |                     |
	|         | -p download-only-543000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 14:14 PDT | 31 Jul 24 14:14 PDT |
	| delete  | -p download-only-543000             | download-only-543000 | jenkins | v1.33.1 | 31 Jul 24 14:14 PDT | 31 Jul 24 14:14 PDT |
	| start   | -o=json --download-only             | download-only-653000 | jenkins | v1.33.1 | 31 Jul 24 14:14 PDT |                     |
	|         | -p download-only-653000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 14:14:44
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 14:14:44.072064   62140 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:14:44.072227   62140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:14:44.072232   62140 out.go:304] Setting ErrFile to fd 2...
	I0731 14:14:44.072236   62140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:14:44.072406   62140 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 14:14:44.073907   62140 out.go:298] Setting JSON to true
	I0731 14:14:44.096066   62140 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":17052,"bootTime":1722443432,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0731 14:14:44.096154   62140 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 14:14:44.118338   62140 out.go:97] [download-only-653000] minikube v1.33.1 on Darwin 14.5
	I0731 14:14:44.118535   62140 notify.go:220] Checking for updates...
	I0731 14:14:44.139939   62140 out.go:169] MINIKUBE_LOCATION=19360
	I0731 14:14:44.161154   62140 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	I0731 14:14:44.181925   62140 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0731 14:14:44.203225   62140 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 14:14:44.225335   62140 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	W0731 14:14:44.268043   62140 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 14:14:44.268538   62140 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 14:14:44.292893   62140 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0731 14:14:44.293068   62140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 14:14:44.372879   62140 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:69 SystemTime:2024-07-31 21:14:44.364334607 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0731 14:14:44.394232   62140 out.go:97] Using the docker driver based on user configuration
	I0731 14:14:44.394278   62140 start.go:297] selected driver: docker
	I0731 14:14:44.394293   62140 start.go:901] validating driver "docker" against <nil>
	I0731 14:14:44.394564   62140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 14:14:44.478006   62140 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:69 SystemTime:2024-07-31 21:14:44.469631003 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0731 14:14:44.478191   62140 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 14:14:44.481212   62140 start_flags.go:393] Using suggested 8100MB memory alloc based on sys=32768MB, container=15991MB
	I0731 14:14:44.481369   62140 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 14:14:44.503242   62140 out.go:169] Using Docker Desktop driver with root privileges
	I0731 14:14:44.525000   62140 cni.go:84] Creating CNI manager for ""
	I0731 14:14:44.525046   62140 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 14:14:44.525060   62140 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 14:14:44.525230   62140 start.go:340] cluster config:
	{Name:download-only-653000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:8100 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-653000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 14:14:44.546991   62140 out.go:97] Starting "download-only-653000" primary control-plane node in "download-only-653000" cluster
	I0731 14:14:44.547037   62140 cache.go:121] Beginning downloading kic base image for docker with docker
	I0731 14:14:44.568717   62140 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0731 14:14:44.568828   62140 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 14:14:44.568891   62140 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0731 14:14:44.587225   62140 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0731 14:14:44.587422   62140 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0731 14:14:44.587447   62140 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0731 14:14:44.587453   62140 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0731 14:14:44.587460   62140 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0731 14:14:44.622179   62140 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0731 14:14:44.622221   62140 cache.go:56] Caching tarball of preloaded images
	I0731 14:14:44.622599   62140 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 14:14:44.646883   62140 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0731 14:14:44.646915   62140 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0731 14:14:44.731936   62140 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:181d3c061f7abe363e688bf9ac3c9580 -> /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0731 14:14:50.692765   62140 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0731 14:14:50.692974   62140 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0731 14:14:51.157665   62140 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0731 14:14:51.157904   62140 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/download-only-653000/config.json ...
	I0731 14:14:51.157926   62140 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/download-only-653000/config.json: {Name:mk0a5365e361e853e0470612ed189de707cf21ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 14:14:51.159247   62140 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 14:14:51.159642   62140 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19360-61501/.minikube/cache/darwin/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-653000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-653000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.30s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.34s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.34s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-653000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnlyKic (1.51s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-911000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-911000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-911000
--- PASS: TestDownloadOnlyKic (1.51s)

TestBinaryMirror (1.31s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-310000 --alsologtostderr --binary-mirror http://127.0.0.1:59702 --driver=docker 
helpers_test.go:175: Cleaning up "binary-mirror-310000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-310000
--- PASS: TestBinaryMirror (1.31s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-891000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-891000: exit status 85 (189.853224ms)

-- stdout --
	* Profile "addons-891000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-891000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-891000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-891000: exit status 85 (210.822098ms)

-- stdout --
	* Profile "addons-891000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-891000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

TestAddons/Setup (221.07s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-891000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-891000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m41.065486709s)
--- PASS: TestAddons/Setup (221.07s)

TestAddons/serial/Volcano (40.34s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 18.146649ms
addons_test.go:905: volcano-admission stabilized in 18.557181ms
addons_test.go:897: volcano-scheduler stabilized in 18.58699ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-8cmtg" [3ac306ca-934c-409d-b8c7-34425d6e28e8] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.006000619s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-dvnjr" [3b802e1c-d036-4831-81ea-4f19433b1354] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003523303s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-mmcph" [bbb9a4e0-9f27-4bbb-a544-fa3fe1579cf2] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004585186s
addons_test.go:932: (dbg) Run:  kubectl --context addons-891000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-891000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-891000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [84964a3a-1fad-4dbe-b9a6-f2be42f44d0d] Pending
helpers_test.go:344: "test-job-nginx-0" [84964a3a-1fad-4dbe-b9a6-f2be42f44d0d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [84964a3a-1fad-4dbe-b9a6-f2be42f44d0d] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.003400851s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-amd64 -p addons-891000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-amd64 -p addons-891000 addons disable volcano --alsologtostderr -v=1: (10.02990371s)
--- PASS: TestAddons/serial/Volcano (40.34s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-891000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-891000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/parallel/InspektorGadget (10.81s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-kjk9h" [a7a87eb7-0439-4d03-a00e-c77e8a7ef0d2] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003650378s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-891000
addons_test.go:851: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-891000: (5.801229351s)
--- PASS: TestAddons/parallel/InspektorGadget (10.81s)

TestAddons/parallel/MetricsServer (5.65s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 5.362234ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-wc4gr" [6a56618b-0584-4ae1-8b0f-b33b61205853] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006934365s
addons_test.go:417: (dbg) Run:  kubectl --context addons-891000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-891000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.65s)

TestAddons/parallel/HelmTiller (13.6s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.218097ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-8z7x5" [774ebfb6-9e77-4675-a8f9-5931856f74d5] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005323373s
addons_test.go:475: (dbg) Run:  kubectl --context addons-891000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-891000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.918305159s)
addons_test.go:480: kubectl --context addons-891000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:475: (dbg) Run:  kubectl --context addons-891000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-891000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (2.744621448s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-891000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.60s)
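
The "Unable to use a TTY" stderr above is kubectl reacting to -it when the calling process (the CI harness) has no terminal attached; kubectl falls back to streaming logs, the test counts that as unexpected stderr and reruns the command. A hedged sketch of the non-interactive variant, swapping -i/-t for --attach so the warning cannot appear; this is an illustration, not a change the test itself makes.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same helm version check as the log, minus TTY allocation: --attach
	// still satisfies --rm's need to attach, but never requests a terminal.
	out, err := exec.Command("kubectl", "--context", "addons-891000",
		"run", "--rm", "helm-test", "--restart=Never",
		"--image=docker.io/alpine/helm:2.16.3",
		"--namespace=kube-system", "--attach", "--", "version").CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("run failed:", err)
	}
}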

TestAddons/parallel/CSI (63.38s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 7.022901ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-891000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891000 get pvc hpvc -o jsonpath={.status.phase} -n default [line repeated 18 times]
addons_test.go:580: (dbg) Run:  kubectl --context addons-891000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a527994b-7325-48f4-bc4b-7ef673bcb5bd] Pending
helpers_test.go:344: "task-pv-pod" [a527994b-7325-48f4-bc4b-7ef673bcb5bd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a527994b-7325-48f4-bc4b-7ef673bcb5bd] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.005576346s
addons_test.go:590: (dbg) Run:  kubectl --context addons-891000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-891000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-891000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-891000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-891000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-891000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default [line repeated 21 times]
addons_test.go:622: (dbg) Run:  kubectl --context addons-891000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a7a8deca-c12c-434b-9c4c-a0cc57b56949] Pending
helpers_test.go:344: "task-pv-pod-restore" [a7a8deca-c12c-434b-9c4c-a0cc57b56949] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a7a8deca-c12c-434b-9c4c-a0cc57b56949] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004361019s
addons_test.go:632: (dbg) Run:  kubectl --context addons-891000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-891000 delete pod task-pv-pod-restore: (1.131094976s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-891000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-891000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-891000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-amd64 -p addons-891000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.55733058s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-amd64 -p addons-891000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (63.38s)
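
The steps above are a complete CSI snapshot/restore round trip: claim hpvc is created and bound, task-pv-pod writes to it, the claim is snapshotted as new-snapshot-demo, then a second claim and pod are restored from that snapshot. A sketch of what the restore-side claim looks like, applied from Go the way the harness shells out; the actual testdata/csi-hostpath-driver/pvc-restore.yaml is not reproduced in this report, so the storage class and size below are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Restore-side PVC: a new claim whose dataSource points at the snapshot.
// Names mirror the log (hpvc-restore, new-snapshot-demo); storageClassName
// and the 1Gi request are illustrative guesses.
const pvcRestore = `
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
`

func main() {
	cmd := exec.Command("kubectl", "--context", "addons-891000", "create", "-f", "-")
	cmd.Stdin = strings.NewReader(pvcRestore)
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out), err)
}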

TestAddons/parallel/Headlamp (17.79s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-891000 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-891000 --alsologtostderr -v=1: (1.079106267s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-zt58h" [9a81e04f-70e3-41fc-98d8-b01daf9869f3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-zt58h" [9a81e04f-70e3-41fc-98d8-b01daf9869f3] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004483853s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-amd64 -p addons-891000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-amd64 -p addons-891000 addons disable headlamp --alsologtostderr -v=1: (5.707132635s)
--- PASS: TestAddons/parallel/Headlamp (17.79s)

TestAddons/parallel/CloudSpanner (5.53s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-jgn9c" [6d04c30b-a24e-4389-a65e-36f038435024] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005503508s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-891000
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

TestAddons/parallel/LocalPath (52.17s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-891000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-891000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-891000 get pvc test-pvc -o jsonpath={.status.phase} -n default [line repeated 6 times]
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [245fd36b-b32d-4117-9eb6-ed3889846559] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [245fd36b-b32d-4117-9eb6-ed3889846559] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [245fd36b-b32d-4117-9eb6-ed3889846559] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004535358s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-891000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-amd64 -p addons-891000 ssh "cat /opt/local-path-provisioner/pvc-2541b9cb-b431-442f-a97c-5a8b40ca9b04_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-891000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-891000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 -p addons-891000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-amd64 -p addons-891000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.324276698s)
--- PASS: TestAddons/parallel/LocalPath (52.17s)

TestAddons/parallel/NvidiaDevicePlugin (5.49s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fts42" [b72bb716-2196-441a-9e6a-f4b4fc6e5f80] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005459014s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-891000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.49s)

TestAddons/parallel/Yakd (10.61s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-cpqbg" [87f7d9fc-a162-4bf6-a110-4255967e1714] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.002984265s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-amd64 -p addons-891000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-amd64 -p addons-891000 addons disable yakd --alsologtostderr -v=1: (5.610989722s)
--- PASS: TestAddons/parallel/Yakd (10.61s)

TestAddons/StoppedEnableDisable (11.37s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-891000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-891000: (10.803216206s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-891000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-891000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-891000
--- PASS: TestAddons/StoppedEnableDisable (11.37s)
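
The point of this test is that addon management keeps working against a stopped profile: the cluster is halted first, then dashboard is enabled and disabled and gvisor disabled without a restart. A small Go sketch replaying exactly the commands the log shows, assuming it runs from the same checkout so out/minikube-darwin-amd64 resolves.

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s(err=%v)\n", args, out, err)
}

func main() {
	run("stop", "-p", "addons-891000")
	run("addons", "enable", "dashboard", "-p", "addons-891000") // works while stopped
	run("addons", "disable", "dashboard", "-p", "addons-891000")
	run("addons", "disable", "gvisor", "-p", "addons-891000")
}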

TestHyperKitDriverInstallOrUpdate (7.54s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.54s)

TestErrorSpam/setup (21.58s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-575000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-575000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-575000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-575000 --driver=docker : (21.575029739s)
--- PASS: TestErrorSpam/setup (21.58s)

TestErrorSpam/start (2.22s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-575000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-575000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-575000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-575000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-575000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-575000 start --dry-run
--- PASS: TestErrorSpam/start (2.22s)

TestErrorSpam/status (0.81s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-575000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-575000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-575000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-575000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-575000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-575000 status
--- PASS: TestErrorSpam/status (0.81s)

TestErrorSpam/pause (1.4s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-575000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-575000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-575000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-575000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-575000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-575000 pause
--- PASS: TestErrorSpam/pause (1.40s)

TestErrorSpam/unpause (1.44s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-575000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-575000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-575000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-575000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-575000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-575000 unpause
--- PASS: TestErrorSpam/unpause (1.44s)

TestErrorSpam/stop (11.25s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-575000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-575000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-575000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-575000 stop: (10.748217614s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-575000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-575000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-575000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-575000 stop
--- PASS: TestErrorSpam/stop (11.25s)
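
Reading the TestErrorSpam steps together: each subcommand (start --dry-run, status, pause, unpause, stop) is run against the nospam-575000 profile and its output is screened for noise. A rough Go sketch of that kind of screen, under the assumption that the check is a substring scan; the token list and the choice of subcommand here are guesses, not error_spam_test.go's actual logic.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "nospam-575000", "status")
	out, _ := cmd.CombinedOutput() // status may exit non-zero; we only screen the text
	for _, line := range strings.Split(string(out), "\n") {
		lower := strings.ToLower(line)
		if strings.Contains(lower, "error") || strings.Contains(lower, "warning") ||
			strings.Contains(lower, "fail") {
			fmt.Println("unexpected spam:", line)
		}
	}
}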

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19360-61501/.minikube/files/etc/test/nested/copy/62037/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (36.54s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-819000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-819000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (36.536055085s)
--- PASS: TestFunctional/serial/StartWithProxy (36.54s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.56s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-819000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-819000 --alsologtostderr -v=8: (33.558459281s)
functional_test.go:659: soft start took 33.558980733s for "functional-819000" cluster.
--- PASS: TestFunctional/serial/SoftStart (33.56s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-819000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-819000 cache add registry.k8s.io/pause:3.1: (1.110817382s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-819000 cache add registry.k8s.io/pause:3.3: (1.145094948s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-819000 cache add registry.k8s.io/pause:latest: (1.012517972s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.27s)

TestFunctional/serial/CacheCmd/cache/add_local (1.39s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-819000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1355554551/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 cache add minikube-local-cache-test:functional-819000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-819000 cache add minikube-local-cache-test:functional-819000: (1.000049006s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 cache delete minikube-local-cache-test:functional-819000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-819000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.39s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-819000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (256.132463ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.45s)
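
The reload sequence above: delete a cached image inside the node, confirm crictl no longer finds it (the expected exit status 1 shown above), run cache reload to push every image from minikube's local cache back into the node, then confirm the image is visible again. A Go sketch of the same sequence via os/exec, assuming the functional-819000 profile is running.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	mk := "out/minikube-darwin-amd64"
	img := "registry.k8s.io/pause:latest"

	// Remove the image inside the node, then expect crictl inspecti to fail.
	exec.Command(mk, "-p", "functional-819000", "ssh", "sudo docker rmi "+img).Run()
	err := exec.Command(mk, "-p", "functional-819000", "ssh", "sudo crictl inspecti "+img).Run()
	fmt.Println("inspect after rmi (exit status 1 expected):", err)

	// Reload pushes cached images back into the node; inspect should now pass.
	exec.Command(mk, "-p", "functional-819000", "cache", "reload").Run()
	err = exec.Command(mk, "-p", "functional-819000", "ssh", "sudo crictl inspecti "+img).Run()
	fmt.Println("inspect after reload (nil expected):", err)
}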

TestFunctional/serial/CacheCmd/cache/delete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (1.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 kubectl -- --context functional-819000 get pods
functional_test.go:712: (dbg) Done: out/minikube-darwin-amd64 -p functional-819000 kubectl -- --context functional-819000 get pods: (1.173364718s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.17s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.49s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-819000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-819000 get pods: (1.490570748s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.49s)

TestFunctional/serial/ExtraConfig (39.75s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-819000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0731 14:23:40.798595   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 14:23:40.905646   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 14:23:40.915823   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 14:23:40.937332   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 14:23:40.977902   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 14:23:41.058195   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 14:23:41.218491   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 14:23:41.538793   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 14:23:42.178947   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 14:23:43.459204   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 14:23:46.019583   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 14:23:51.140836   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 14:24:01.382110   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-819000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.748881264s)
functional_test.go:757: restart took 39.748989475s for "functional-819000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.75s)
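
The restart above threads --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision through to the kube-apiserver static pod. One way to confirm the flag landed, shown as a hedged sketch rather than anything functional_test.go does: read the apiserver container's command line and look for the value (component=kube-apiserver is the standard kubeadm static-pod label).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-819000",
		"-n", "kube-system", "get", "pods",
		"-l", "component=kube-apiserver",
		"-o", "jsonpath={.items[0].spec.containers[0].command}").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	if strings.Contains(string(out), "enable-admission-plugins=NamespaceAutoProvision") {
		fmt.Println("extra-config applied to kube-apiserver")
	} else {
		fmt.Println("extra-config missing from apiserver command line")
	}
}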

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-819000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (2.87s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 logs
E0731 14:24:21.863085   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-819000 logs: (2.874604748s)
--- PASS: TestFunctional/serial/LogsCmd (2.87s)

TestFunctional/serial/LogsFileCmd (2.92s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd4020235983/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-819000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd4020235983/001/logs.txt: (2.915073536s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.92s)

TestFunctional/serial/InvalidService (5.64s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-819000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-819000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-819000: exit status 115 (386.618738ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32413 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-819000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-819000 delete -f testdata/invalidsvc.yaml: (2.116853601s)
--- PASS: TestFunctional/serial/InvalidService (5.64s)
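
Exit status 115 (SVC_UNREACHABLE) above is the expected outcome: invalid-svc selects no running pod, so the NodePort URL minikube prints has nothing behind it. The same precondition can be checked directly, since a service with no endpoint addresses cannot route traffic; this check is illustrative, not minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-819000",
		"get", "endpoints", "invalid-svc",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil || strings.TrimSpace(string(out)) == "" {
		fmt.Println("no ready endpoints: service is unreachable")
		return
	}
	fmt.Println("ready endpoint IPs:", string(out))
}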

TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-819000 config get cpus: exit status 14 (61.287925ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-819000 config get cpus: exit status 14 (57.410502ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)
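
The two Non-zero exit entries above pin down the contract: config get on an unset key exits 14 with "specified key could not be found in config", while set/unset and get on a present key exit 0. A small Go sketch of reading that exit code, assuming the same binary path as the log.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("out/minikube-darwin-amd64",
		"-p", "functional-819000", "config", "get", "cpus").Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit code:", exitErr.ExitCode()) // 14 when the key is unset
	} else if err == nil {
		fmt.Println("key is set; command exited 0")
	} else {
		fmt.Println("could not run minikube:", err)
	}
}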

TestFunctional/parallel/DashboardCmd (11.71s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-819000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-819000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 63727: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.71s)

TestFunctional/parallel/DryRun (1.18s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-819000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-819000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (587.329973ms)
-- stdout --
	* [functional-819000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0731 14:25:14.084174   63675 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:25:14.084350   63675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:25:14.084356   63675 out.go:304] Setting ErrFile to fd 2...
	I0731 14:25:14.084360   63675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:25:14.084532   63675 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 14:25:14.085910   63675 out.go:298] Setting JSON to false
	I0731 14:25:14.108269   63675 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":17681,"bootTime":1722443433,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0731 14:25:14.108363   63675 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 14:25:14.130143   63675 out.go:177] * [functional-819000] minikube v1.33.1 on Darwin 14.5
	I0731 14:25:14.172137   63675 notify.go:220] Checking for updates...
	I0731 14:25:14.192967   63675 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 14:25:14.213854   63675 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	I0731 14:25:14.235220   63675 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0731 14:25:14.256033   63675 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 14:25:14.276983   63675 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	I0731 14:25:14.298233   63675 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 14:25:14.320806   63675 config.go:182] Loaded profile config "functional-819000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 14:25:14.321603   63675 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 14:25:14.345056   63675 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0731 14:25:14.345217   63675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 14:25:14.431070   63675 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2024-07-31 21:25:14.421791457 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0731 14:25:14.452850   63675 out.go:177] * Using the docker driver based on existing profile
	I0731 14:25:14.473864   63675 start.go:297] selected driver: docker
	I0731 14:25:14.473893   63675 start.go:901] validating driver "docker" against &{Name:functional-819000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 14:25:14.474028   63675 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 14:25:14.499783   63675 out.go:177] 
	W0731 14:25:14.522451   63675 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0731 14:25:14.542479   63675 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-819000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.18s)
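
The dry-run pair above shows minikube's up-front resource validation: the 250MB request is rejected with RSRC_INSUFFICIENT_REQ_MEMORY because it falls below the 1800MB usable minimum, before any container work starts. A minimal Go sketch of that kind of guard, with hypothetical names (not minikube's actual implementation):

    package main

    import (
        "errors"
        "fmt"
    )

    const minUsableMemoryMB = 1800 // the usable minimum reported in the log above

    var errInsufficientMemory = errors.New("RSRC_INSUFFICIENT_REQ_MEMORY")

    // validateRequestedMemory is a hypothetical stand-in for the guard the
    // dry-run hits: requests below the minimum fail before any work starts.
    func validateRequestedMemory(requestedMB int) error {
        if requestedMB < minUsableMemoryMB {
            return fmt.Errorf("%w: requested memory allocation %dMB is less than the usable minimum of %dMB",
                errInsufficientMemory, requestedMB, minUsableMemoryMB)
        }
        return nil
    }

    func main() {
        if err := validateRequestedMemory(250); err != nil {
            fmt.Println("X Exiting due to", err) // mirrors the exit path seen in the log
        }
    }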

TestFunctional/parallel/InternationalLanguage (0.66s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-819000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-819000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (657.550594ms)

-- stdout --
	* [functional-819000] minikube v1.33.1 sur Darwin 14.5
	  - MINIKUBE_LOCATION=19360
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0731 14:25:13.420973   63657 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:25:13.421169   63657 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:25:13.421174   63657 out.go:304] Setting ErrFile to fd 2...
	I0731 14:25:13.421178   63657 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:25:13.421374   63657 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 14:25:13.423057   63657 out.go:298] Setting JSON to false
	I0731 14:25:13.446377   63657 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":17680,"bootTime":1722443433,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0731 14:25:13.446468   63657 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 14:25:13.468052   63657 out.go:177] * [functional-819000] minikube v1.33.1 sur Darwin 14.5
	I0731 14:25:13.510053   63657 out.go:177]   - MINIKUBE_LOCATION=19360
	I0731 14:25:13.510106   63657 notify.go:220] Checking for updates...
	I0731 14:25:13.552903   63657 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
	I0731 14:25:13.573969   63657 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0731 14:25:13.631985   63657 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 14:25:13.652997   63657 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube
	I0731 14:25:13.694691   63657 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 14:25:13.732706   63657 config.go:182] Loaded profile config "functional-819000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 14:25:13.733472   63657 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 14:25:13.757137   63657 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0731 14:25:13.757315   63657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0731 14:25:13.840917   63657 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2024-07-31 21:25:13.831162159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0731 14:25:13.882855   63657 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0731 14:25:13.904086   63657 start.go:297] selected driver: docker
	I0731 14:25:13.904115   63657 start.go:901] validating driver "docker" against &{Name:functional-819000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 14:25:13.904242   63657 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 14:25:13.928729   63657 out.go:177] 
	W0731 14:25:13.950045   63657 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0731 14:25:13.986925   63657 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.66s)
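
The French output above is produced by re-running the same failing dry-run under a French locale, so the RSRC_INSUFFICIENT_REQ_MEMORY message arrives localized. A minimal sketch of invoking the binary with a locale override; LC_ALL=fr is an assumption about the exact variable the test sets:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Same dry-run as above, but with a French locale in the child env.
        cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p", "functional-819000",
            "--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=docker")
        cmd.Env = append(os.Environ(), "LC_ALL=fr") // assumption: the locale env selects the message catalog
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s\nexit: %v\n", out, err) // expect localized text and exit status 23
    }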

TestFunctional/parallel/StatusCmd (0.84s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.84s)

TestFunctional/parallel/AddonsCmd (0.25s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.25s)
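
The second invocation above asks for machine-readable output. A minimal sketch of consuming "addons list -o json" from Go; the object-keyed-by-addon-name shape with a Status field is an assumption about the output, not a documented schema:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-819000",
            "addons", "list", "-o", "json").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        // Assumed shape: one JSON object per addon, keyed by addon name.
        var addons map[string]struct {
            Status string `json:"Status"`
        }
        if err := json.Unmarshal(out, &addons); err != nil {
            fmt.Println(err)
            return
        }
        for name, a := range addons {
            fmt.Println(name, a.Status)
        }
    }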

TestFunctional/parallel/PersistentVolumeClaim (27.84s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1c27231e-bc3a-4c50-90cb-abbd39f99987] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005867603s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-819000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-819000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-819000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-819000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [df572b01-0a2e-4418-a749-1b5eb9a72dd0] Pending
helpers_test.go:344: "sp-pod" [df572b01-0a2e-4418-a749-1b5eb9a72dd0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [df572b01-0a2e-4418-a749-1b5eb9a72dd0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004218584s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-819000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-819000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-819000 delete -f testdata/storage-provisioner/pod.yaml: (1.219867586s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-819000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d6f65a5d-6d45-435c-b528-b7492f9e017f] Pending
helpers_test.go:344: "sp-pod" [d6f65a5d-6d45-435c-b528-b7492f9e017f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d6f65a5d-6d45-435c-b528-b7492f9e017f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004122243s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-819000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.84s)
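
Both waits above poll until pods matching a label selector report Running (13s and 8s here). A minimal sketch of such a wait loop, shelling out to kubectl rather than using client-go as the real helpers do:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForRunning polls the pod phases for a selector until one is Running.
    func waitForRunning(context, namespace, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
                "get", "pods", "-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
            if err == nil && strings.Contains(string(out), "Running") {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pods %q not Running within %s", selector, timeout)
    }

    func main() {
        if err := waitForRunning("functional-819000", "default", "test=storage-provisioner", 3*time.Minute); err != nil {
            fmt.Println(err)
        }
    }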

TestFunctional/parallel/SSHCmd (0.51s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.51s)

TestFunctional/parallel/CpCmd (1.7s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh -n functional-819000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 cp functional-819000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd1794633553/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh -n functional-819000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh -n functional-819000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.70s)

TestFunctional/parallel/MySQL (29.32s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-819000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-njhqs" [6339782a-ca2f-4b9a-bf6a-392649b795a3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-njhqs" [6339782a-ca2f-4b9a-bf6a-392649b795a3] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.004780458s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-819000 exec mysql-64454c8b5c-njhqs -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-819000 exec mysql-64454c8b5c-njhqs -- mysql -ppassword -e "show databases;": exit status 1 (119.194241ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-819000 exec mysql-64454c8b5c-njhqs -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-819000 exec mysql-64454c8b5c-njhqs -- mysql -ppassword -e "show databases;": exit status 1 (114.616853ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-819000 exec mysql-64454c8b5c-njhqs -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-819000 exec mysql-64454c8b5c-njhqs -- mysql -ppassword -e "show databases;": exit status 1 (127.101195ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-819000 exec mysql-64454c8b5c-njhqs -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.32s)
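
The two failed probes above are expected while mysqld initializes: first the authentication tables are not yet loaded (ERROR 1045), then the socket is not up (ERROR 2002), and a later retry succeeds. A minimal retry-with-backoff sketch of that pattern (not the test's own helper; pod name and command are taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        args := []string{"--context", "functional-819000", "exec", "mysql-64454c8b5c-njhqs",
            "--", "mysql", "-ppassword", "-e", "show databases;"}
        var lastErr error
        for attempt, delay := 1, time.Second; attempt <= 5; attempt, delay = attempt+1, delay*2 {
            out, err := exec.Command("kubectl", args...).CombinedOutput()
            if err == nil {
                fmt.Printf("%s", out)
                return
            }
            lastErr = fmt.Errorf("attempt %d: %v: %s", attempt, err, out)
            time.Sleep(delay) // exponential backoff while the server comes up
        }
        fmt.Println("giving up:", lastErr)
    }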

TestFunctional/parallel/FileSync (0.26s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/62037/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh "sudo cat /etc/test/nested/copy/62037/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

TestFunctional/parallel/CertSync (1.53s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/62037.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh "sudo cat /etc/ssl/certs/62037.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/62037.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh "sudo cat /usr/share/ca-certificates/62037.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/620372.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh "sudo cat /etc/ssl/certs/620372.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/620372.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh "sudo cat /usr/share/ca-certificates/620372.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.53s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-819000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-819000 ssh "sudo systemctl is-active crio": exit status 1 (292.011467ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)
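
The non-zero exit here is the point of the test: systemctl is-active prints the unit state and exits non-zero (status 3 above) when the unit is not active, so "inactive" plus a failing exit confirms the cri-o runtime is disabled. A minimal sketch of reading both the state and the exit code, with the ssh hop omitted:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Output still returns captured stdout even when the command exits non-zero.
        out, err := exec.Command("systemctl", "is-active", "crio").Output()
        state := strings.TrimSpace(string(out)) // e.g. "inactive"
        code := 0
        if exitErr, ok := err.(*exec.ExitError); ok {
            code = exitErr.ExitCode()
        }
        fmt.Printf("state=%s exit=%d\n", state, code) // inactive + non-zero exit = runtime disabled
    }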

TestFunctional/parallel/License (0.51s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.51s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-819000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-819000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-819000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 63347: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-819000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-819000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-819000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [55513d56-4dc1-4002-bf25-c4d6b9d4c1a6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [55513d56-4dc1-4002-bf25-c4d6b9d4c1a6] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.004168339s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.16s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-819000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-819000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 63376: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-819000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-819000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-ntfct" [dd304ad2-7d8c-4819-810a-23a332a5c8f1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-ntfct" [dd304ad2-7d8c-4819-810a-23a332a5c8f1] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004152082s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "286.392971ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "82.263983ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "294.690179ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "90.039427ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctional/parallel/ServiceCmd/List (0.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.66s)

TestFunctional/parallel/MountCmd/any-port (6.71s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-819000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port4219870338/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722461102125602000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port4219870338/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722461102125602000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port4219870338/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722461102125602000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port4219870338/001/test-1722461102125602000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-819000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (258.807642ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 31 21:25 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 31 21:25 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 31 21:25 test-1722461102125602000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh cat /mount-9p/test-1722461102125602000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-819000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f4f6076a-049f-44bf-939d-4756035a8ae8] Pending
helpers_test.go:344: "busybox-mount" [f4f6076a-049f-44bf-939d-4756035a8ae8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f4f6076a-049f-44bf-939d-4756035a8ae8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [f4f6076a-049f-44bf-939d-4756035a8ae8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.006284565s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-819000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-819000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port4219870338/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.71s)
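
The first findmnt probe above fails with exit status 1 because the 9p mount comes up asynchronously; the test simply retries until it appears. A minimal polling sketch of that check (flags and profile name taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        for i := 0; i < 10; i++ {
            err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-819000",
                "ssh", "findmnt -T /mount-9p | grep 9p").Run()
            if err == nil {
                fmt.Println("9p mount is visible in the guest")
                return
            }
            time.Sleep(500 * time.Millisecond) // the mount is established asynchronously
        }
        fmt.Println("mount never appeared")
    }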

TestFunctional/parallel/ServiceCmd/JSONOutput (0.67s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 service list -o json
E0731 14:25:02.823909   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
functional_test.go:1490: Took "672.71315ms" to run "out/minikube-darwin-amd64 -p functional-819000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.67s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-819000 service --namespace=default --https --url hello-node: signal: killed (15.003312999s)

-- stdout --
	https://127.0.0.1:60742

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:60742
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)
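
"signal: killed" is expected here: on the Docker driver the service command keeps a tunnel open in the foreground, so the test reads the URL from stdout and then kills the still-running process, in this case after a 15s window. A minimal sketch of that read-then-kill pattern:

    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-819000",
            "service", "--namespace=default", "--https", "--url", "hello-node")
        stdout, _ := cmd.StdoutPipe()
        if err := cmd.Start(); err != nil {
            fmt.Println(err)
            return
        }
        go func() {
            time.Sleep(15 * time.Second)
            cmd.Process.Kill() // the tunnel never exits on its own
        }()
        scanner := bufio.NewScanner(stdout)
        for scanner.Scan() {
            if line := strings.TrimSpace(scanner.Text()); strings.HasPrefix(line, "https://") {
                fmt.Println("found endpoint:", line)
            }
        }
        cmd.Wait()
    }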

TestFunctional/parallel/MountCmd/specific-port (1.69s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-819000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2393330122/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-819000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (249.514325ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-819000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2393330122/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-819000 ssh "sudo umount -f /mount-9p": exit status 1 (225.817171ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-819000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-819000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2393330122/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.69s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.01s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-819000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4286924855/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-819000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4286924855/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-819000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4286924855/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-819000 ssh "findmnt -T" /mount1: exit status 1 (331.256374ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-819000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-819000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4286924855/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-819000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4286924855/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-819000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4286924855/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.01s)

TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 service hello-node --url --format={{.IP}}
2024/07/31 14:25:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-819000 service hello-node --url --format={{.IP}}: signal: killed (15.002780026s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (0.7s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.70s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-819000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-819000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-819000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-819000 image ls --format short --alsologtostderr:
I0731 14:25:49.153904   63952 out.go:291] Setting OutFile to fd 1 ...
I0731 14:25:49.154200   63952 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 14:25:49.154206   63952 out.go:304] Setting ErrFile to fd 2...
I0731 14:25:49.154209   63952 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 14:25:49.154404   63952 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
I0731 14:25:49.155010   63952 config.go:182] Loaded profile config "functional-819000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 14:25:49.155103   63952 config.go:182] Loaded profile config "functional-819000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 14:25:49.155517   63952 cli_runner.go:164] Run: docker container inspect functional-819000 --format={{.State.Status}}
I0731 14:25:49.176254   63952 ssh_runner.go:195] Run: systemctl --version
I0731 14:25:49.176348   63952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819000
I0731 14:25:49.196644   63952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60496 SSHKeyPath:/Users/jenkins/minikube-integration/19360-61501/.minikube/machines/functional-819000/id_rsa Username:docker}
I0731 14:25:49.283249   63952 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-819000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | alpine            | 1ae23480369fa | 43.2MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/minikube-local-cache-test | functional-819000 | 63f8cc566929a | 30B    |
| registry.k8s.io/kube-scheduler              | v1.30.3           | 3edc18e7b7672 | 62MB   |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 76932a3b37d7e | 111MB  |
| docker.io/kicbase/echo-server               | functional-819000 | 9056ab77afb8e | 4.94MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 1f6d574d502f3 | 117MB  |
| docker.io/library/nginx                     | latest            | a72860cb95fd5 | 188MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 55bb025d2cfa5 | 84.7MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-819000 image ls --format table --alsologtostderr:
I0731 14:25:49.877463   63964 out.go:291] Setting OutFile to fd 1 ...
I0731 14:25:49.877769   63964 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 14:25:49.877775   63964 out.go:304] Setting ErrFile to fd 2...
I0731 14:25:49.877779   63964 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 14:25:49.877973   63964 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
I0731 14:25:49.878630   63964 config.go:182] Loaded profile config "functional-819000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 14:25:49.878733   63964 config.go:182] Loaded profile config "functional-819000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 14:25:49.879150   63964 cli_runner.go:164] Run: docker container inspect functional-819000 --format={{.State.Status}}
I0731 14:25:49.900079   63964 ssh_runner.go:195] Run: systemctl --version
I0731 14:25:49.900160   63964 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819000
I0731 14:25:49.920733   63964 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60496 SSHKeyPath:/Users/jenkins/minikube-integration/19360-61501/.minikube/machines/functional-819000/id_rsa Username:docker}
I0731 14:25:50.008587   63964 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-819000 image ls --format json --alsologtostderr:
[{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"63f8cc566929a1c4dddfe4758aa9f391148ba31e985b2dcd23a8920f779df947","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-819000"],"size":"30"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"84700000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117000000"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"62000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"111000000"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-819000"],"size":"4940000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-819000 image ls --format json --alsologtostderr:
I0731 14:25:49.635467   63960 out.go:291] Setting OutFile to fd 1 ...
I0731 14:25:49.635756   63960 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 14:25:49.635762   63960 out.go:304] Setting ErrFile to fd 2...
I0731 14:25:49.635766   63960 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 14:25:49.635964   63960 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
I0731 14:25:49.636596   63960 config.go:182] Loaded profile config "functional-819000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 14:25:49.636699   63960 config.go:182] Loaded profile config "functional-819000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 14:25:49.637102   63960 cli_runner.go:164] Run: docker container inspect functional-819000 --format={{.State.Status}}
I0731 14:25:49.656869   63960 ssh_runner.go:195] Run: systemctl --version
I0731 14:25:49.656949   63960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819000
I0731 14:25:49.677698   63960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60496 SSHKeyPath:/Users/jenkins/minikube-integration/19360-61501/.minikube/machines/functional-819000/id_rsa Username:docker}
I0731 14:25:49.765161   63960 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
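
Note: because the JSON variant prints one flat array of image objects, it is the easiest of the four list formats to post-process. A minimal sketch, assuming jq is available on the host (jq is not part of the test harness):

	out/minikube-darwin-amd64 -p functional-819000 image ls --format json \
	  | jq -r '.[] | "\(.repoTags[0])\t\(.size)"'

Each output line then pairs a tag such as registry.k8s.io/etcd:3.5.12-0 with its size in bytes.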

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-819000 image ls --format yaml --alsologtostderr:
- id: 1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117000000"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-819000
size: "4940000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 63f8cc566929a1c4dddfe4758aa9f391148ba31e985b2dcd23a8920f779df947
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-819000
size: "30"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "84700000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "62000000"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "111000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-819000 image ls --format yaml --alsologtostderr:
I0731 14:25:49.392576   63956 out.go:291] Setting OutFile to fd 1 ...
I0731 14:25:49.392875   63956 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 14:25:49.392882   63956 out.go:304] Setting ErrFile to fd 2...
I0731 14:25:49.392886   63956 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 14:25:49.393074   63956 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
I0731 14:25:49.393874   63956 config.go:182] Loaded profile config "functional-819000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 14:25:49.393969   63956 config.go:182] Loaded profile config "functional-819000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 14:25:49.394383   63956 cli_runner.go:164] Run: docker container inspect functional-819000 --format={{.State.Status}}
I0731 14:25:49.414061   63956 ssh_runner.go:195] Run: systemctl --version
I0731 14:25:49.414135   63956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819000
I0731 14:25:49.436270   63956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60496 SSHKeyPath:/Users/jenkins/minikube-integration/19360-61501/.minikube/machines/functional-819000/id_rsa Username:docker}
I0731 14:25:49.524743   63956 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-819000 ssh pgrep buildkitd: exit status 1 (239.754935ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 image build -t localhost/my-image:functional-819000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-819000 image build -t localhost/my-image:functional-819000 testdata/build --alsologtostderr: (2.617391454s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-819000 image build -t localhost/my-image:functional-819000 testdata/build --alsologtostderr:
I0731 14:25:50.363232   63974 out.go:291] Setting OutFile to fd 1 ...
I0731 14:25:50.364180   63974 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 14:25:50.364188   63974 out.go:304] Setting ErrFile to fd 2...
I0731 14:25:50.364192   63974 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 14:25:50.364387   63974 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
I0731 14:25:50.365063   63974 config.go:182] Loaded profile config "functional-819000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 14:25:50.365824   63974 config.go:182] Loaded profile config "functional-819000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 14:25:50.366262   63974 cli_runner.go:164] Run: docker container inspect functional-819000 --format={{.State.Status}}
I0731 14:25:50.386209   63974 ssh_runner.go:195] Run: systemctl --version
I0731 14:25:50.386302   63974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-819000
I0731 14:25:50.406241   63974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60496 SSHKeyPath:/Users/jenkins/minikube-integration/19360-61501/.minikube/machines/functional-819000/id_rsa Username:docker}
I0731 14:25:50.493246   63974 build_images.go:161] Building image from path: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.2281118794.tar
I0731 14:25:50.493381   63974 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0731 14:25:50.502781   63974 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2281118794.tar
I0731 14:25:50.507065   63974 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2281118794.tar: stat -c "%s %y" /var/lib/minikube/build/build.2281118794.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2281118794.tar': No such file or directory
I0731 14:25:50.507097   63974 ssh_runner.go:362] scp /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.2281118794.tar --> /var/lib/minikube/build/build.2281118794.tar (3072 bytes)
I0731 14:25:50.532025   63974 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2281118794
I0731 14:25:50.542011   63974 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2281118794 -xf /var/lib/minikube/build/build.2281118794.tar
I0731 14:25:50.552138   63974 docker.go:360] Building image: /var/lib/minikube/build/build.2281118794
I0731 14:25:50.552245   63974 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-819000 /var/lib/minikube/build/build.2281118794
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.3s
#6 [2/3] RUN true
#6 DONE 0.3s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:7f7af48f0202f259a00a163c1a744fd44beeac983c8fe0f0b52656dfc653a3e8 done
#8 naming to localhost/my-image:functional-819000 done
#8 DONE 0.0s
I0731 14:25:52.869812   63974 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-819000 /var/lib/minikube/build/build.2281118794: (2.317532252s)
I0731 14:25:52.869885   63974 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2281118794
I0731 14:25:52.879797   63974 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2281118794.tar
I0731 14:25:52.889382   63974 build_images.go:217] Built localhost/my-image:functional-819000 from /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.2281118794.tar
I0731 14:25:52.889413   63974 build_images.go:133] succeeded building to: functional-819000
I0731 14:25:52.889419   63974 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.10s)
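
Note: the BuildKit steps above (#1 through #8) imply that testdata/build holds a three-instruction Dockerfile (FROM gcr.io/k8s-minikube/busybox:latest, RUN true, ADD content.txt /) plus a tiny build context. A hypothetical reproduction outside the harness; the real content.txt is not shown in this report, so a placeholder is used:

	mkdir -p /tmp/build && cd /tmp/build
	printf 'placeholder\n' > content.txt    # stands in for the 62B context transferred above
	cat > Dockerfile <<-'EOF'
	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /
	EOF
	out/minikube-darwin-amd64 -p functional-819000 image build -t localhost/my-image:functional-819000 .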

TestFunctional/parallel/ImageCommands/Setup (1.69s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.658593387s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-819000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.69s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 image load --daemon docker.io/kicbase/echo-server:functional-819000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.07s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 image load --daemon docker.io/kicbase/echo-server:functional-819000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-819000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 image load --daemon docker.io/kicbase/echo-server:functional-819000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.61s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 image save docker.io/kicbase/echo-server:functional-819000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-819000 service hello-node --url: signal: killed (15.002026071s)
-- stdout --
	http://127.0.0.1:60874
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:60874
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)
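
Note: the `signal: killed` non-zero exit is expected here. With the Docker driver on darwin, `service --url` holds a tunnel open in the foreground (hence the stderr warning about keeping the terminal open), so the test reads the printed URL and then kills the command, which is why the 15-second run ends in signal: killed rather than exit 0. While such a tunnel is up, the endpoint can be probed from a second shell; the port below is specific to this run:

	curl -s http://127.0.0.1:60874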

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 image rm docker.io/kicbase/echo-server:functional-819000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-819000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 image save --daemon docker.io/kicbase/echo-server:functional-819000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-819000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)
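
Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon together exercise a full round trip of an image through a tar archive and back into both the cluster and the host daemon. Condensed into one sequence, using the same commands and paths as the runs above:

	out/minikube-darwin-amd64 -p functional-819000 image save docker.io/kicbase/echo-server:functional-819000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
	out/minikube-darwin-amd64 -p functional-819000 image rm docker.io/kicbase/echo-server:functional-819000 --alsologtostderr
	out/minikube-darwin-amd64 -p functional-819000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
	out/minikube-darwin-amd64 -p functional-819000 image save --daemon docker.io/kicbase/echo-server:functional-819000 --alsologtostderr
	docker image inspect docker.io/kicbase/echo-server:functional-819000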

TestFunctional/parallel/DockerEnv/bash (0.93s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-819000 docker-env) && out/minikube-darwin-amd64 status -p functional-819000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-819000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.93s)
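
Note: docker-env works by emitting export statements, which the eval in this test applies to the caller's shell so that the host docker CLI talks to the dockerd inside the node. A sketch of the usual shape of that output; the variable names are the ones minikube conventionally emits, and the values here are illustrative rather than taken from this run:

	eval $(out/minikube-darwin-amd64 -p functional-819000 docker-env)
	# typically sets, among others:
	#   DOCKER_TLS_VERIFY=1
	#   DOCKER_HOST=tcp://127.0.0.1:<mapped port>
	#   DOCKER_CERT_PATH=<minikube home>/certs
	#   MINIKUBE_ACTIVE_DOCKERD=functional-819000
	docker images    # now lists images inside the functional-819000 node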

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-819000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-819000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-819000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-819000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (108.7s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-237000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker 
E0731 14:26:24.744952   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-237000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker : (1m48.003914858s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (108.70s)

TestMultiControlPlane/serial/DeployApp (5.23s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-237000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-237000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-237000 -- rollout status deployment/busybox: (2.845914516s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-237000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-237000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-237000 -- exec busybox-fc5497c4f-7bkfp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-237000 -- exec busybox-fc5497c4f-pshlc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-237000 -- exec busybox-fc5497c4f-vwkwd -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-237000 -- exec busybox-fc5497c4f-7bkfp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-237000 -- exec busybox-fc5497c4f-pshlc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-237000 -- exec busybox-fc5497c4f-vwkwd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-237000 -- exec busybox-fc5497c4f-7bkfp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-237000 -- exec busybox-fc5497c4f-pshlc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-237000 -- exec busybox-fc5497c4f-vwkwd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.23s)

TestMultiControlPlane/serial/PingHostFromPods (1.38s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-237000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-237000 -- exec busybox-fc5497c4f-7bkfp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-237000 -- exec busybox-fc5497c4f-7bkfp -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-237000 -- exec busybox-fc5497c4f-pshlc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-237000 -- exec busybox-fc5497c4f-pshlc -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-237000 -- exec busybox-fc5497c4f-vwkwd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-237000 -- exec busybox-fc5497c4f-vwkwd -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.38s)
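
Note: the shell pipeline in this test resolves host.minikube.internal from inside each pod and feeds the result to ping. With the nslookup output format of the busybox 1.28 image used here, line 5 carries the answer record and the third space-delimited field is the address. An illustrative walk-through (the nslookup output shape is assumed from busybox 1.28; the resolved address matches the 192.168.65.254 pings above):

	# nslookup host.minikube.internal    # busybox 1.28 prints roughly:
	#   Server:    10.96.0.10
	#   Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
	#   (blank line)
	#   Name:      host.minikube.internal
	#   Address 1: 192.168.65.254        <- line 5, field 3 is the IP
	nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
	# -> 192.168.65.254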

TestMultiControlPlane/serial/AddWorkerNode (18.95s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-237000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-237000 -v=7 --alsologtostderr: (18.089989593s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (18.95s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-237000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.68s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.68s)

TestMultiControlPlane/serial/CopyFile (16.22s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 cp testdata/cp-test.txt ha-237000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 cp ha-237000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile776527295/001/cp-test_ha-237000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 cp ha-237000:/home/docker/cp-test.txt ha-237000-m02:/home/docker/cp-test_ha-237000_ha-237000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m02 "sudo cat /home/docker/cp-test_ha-237000_ha-237000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 cp ha-237000:/home/docker/cp-test.txt ha-237000-m03:/home/docker/cp-test_ha-237000_ha-237000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m03 "sudo cat /home/docker/cp-test_ha-237000_ha-237000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 cp ha-237000:/home/docker/cp-test.txt ha-237000-m04:/home/docker/cp-test_ha-237000_ha-237000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m04 "sudo cat /home/docker/cp-test_ha-237000_ha-237000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 cp testdata/cp-test.txt ha-237000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 cp ha-237000-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile776527295/001/cp-test_ha-237000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 cp ha-237000-m02:/home/docker/cp-test.txt ha-237000:/home/docker/cp-test_ha-237000-m02_ha-237000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000 "sudo cat /home/docker/cp-test_ha-237000-m02_ha-237000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 cp ha-237000-m02:/home/docker/cp-test.txt ha-237000-m03:/home/docker/cp-test_ha-237000-m02_ha-237000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m03 "sudo cat /home/docker/cp-test_ha-237000-m02_ha-237000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 cp ha-237000-m02:/home/docker/cp-test.txt ha-237000-m04:/home/docker/cp-test_ha-237000-m02_ha-237000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m04 "sudo cat /home/docker/cp-test_ha-237000-m02_ha-237000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 cp testdata/cp-test.txt ha-237000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 cp ha-237000-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile776527295/001/cp-test_ha-237000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 cp ha-237000-m03:/home/docker/cp-test.txt ha-237000:/home/docker/cp-test_ha-237000-m03_ha-237000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000 "sudo cat /home/docker/cp-test_ha-237000-m03_ha-237000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 cp ha-237000-m03:/home/docker/cp-test.txt ha-237000-m02:/home/docker/cp-test_ha-237000-m03_ha-237000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m02 "sudo cat /home/docker/cp-test_ha-237000-m03_ha-237000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 cp ha-237000-m03:/home/docker/cp-test.txt ha-237000-m04:/home/docker/cp-test_ha-237000-m03_ha-237000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m04 "sudo cat /home/docker/cp-test_ha-237000-m03_ha-237000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 cp testdata/cp-test.txt ha-237000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 cp ha-237000-m04:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile776527295/001/cp-test_ha-237000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 cp ha-237000-m04:/home/docker/cp-test.txt ha-237000:/home/docker/cp-test_ha-237000-m04_ha-237000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000 "sudo cat /home/docker/cp-test_ha-237000-m04_ha-237000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 cp ha-237000-m04:/home/docker/cp-test.txt ha-237000-m02:/home/docker/cp-test_ha-237000-m04_ha-237000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m02 "sudo cat /home/docker/cp-test_ha-237000-m04_ha-237000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 cp ha-237000-m04:/home/docker/cp-test.txt ha-237000-m03:/home/docker/cp-test_ha-237000-m04_ha-237000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 ssh -n ha-237000-m03 "sudo cat /home/docker/cp-test_ha-237000-m04_ha-237000-m03.txt"
E0731 14:28:40.801079   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/CopyFile (16.22s)

TestMultiControlPlane/serial/StopSecondaryNode (11.35s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-237000 node stop m02 -v=7 --alsologtostderr: (10.706249431s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-237000 status -v=7 --alsologtostderr: exit status 7 (640.332865ms)
-- stdout --
	ha-237000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-237000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-237000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-237000-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0731 14:28:51.788206   64774 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:28:51.788508   64774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:28:51.788514   64774 out.go:304] Setting ErrFile to fd 2...
	I0731 14:28:51.788518   64774 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:28:51.788711   64774 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 14:28:51.788902   64774 out.go:298] Setting JSON to false
	I0731 14:28:51.788923   64774 mustload.go:65] Loading cluster: ha-237000
	I0731 14:28:51.788954   64774 notify.go:220] Checking for updates...
	I0731 14:28:51.789230   64774 config.go:182] Loaded profile config "ha-237000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 14:28:51.789247   64774 status.go:255] checking status of ha-237000 ...
	I0731 14:28:51.789655   64774 cli_runner.go:164] Run: docker container inspect ha-237000 --format={{.State.Status}}
	I0731 14:28:51.808161   64774 status.go:330] ha-237000 host status = "Running" (err=<nil>)
	I0731 14:28:51.808204   64774 host.go:66] Checking if "ha-237000" exists ...
	I0731 14:28:51.808474   64774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-237000
	I0731 14:28:51.827320   64774 host.go:66] Checking if "ha-237000" exists ...
	I0731 14:28:51.827567   64774 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 14:28:51.827628   64774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-237000
	I0731 14:28:51.846875   64774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60953 SSHKeyPath:/Users/jenkins/minikube-integration/19360-61501/.minikube/machines/ha-237000/id_rsa Username:docker}
	I0731 14:28:51.932586   64774 ssh_runner.go:195] Run: systemctl --version
	I0731 14:28:51.937448   64774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 14:28:51.948664   64774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-237000
	I0731 14:28:51.967906   64774 kubeconfig.go:125] found "ha-237000" server: "https://127.0.0.1:60952"
	I0731 14:28:51.967938   64774 api_server.go:166] Checking apiserver status ...
	I0731 14:28:51.967988   64774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 14:28:51.978971   64774 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2418/cgroup
	W0731 14:28:51.987975   64774 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2418/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 14:28:51.988064   64774 ssh_runner.go:195] Run: ls
	I0731 14:28:51.992128   64774 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60952/healthz ...
	I0731 14:28:51.997136   64774 api_server.go:279] https://127.0.0.1:60952/healthz returned 200:
	ok
	I0731 14:28:51.997149   64774 status.go:422] ha-237000 apiserver status = Running (err=<nil>)
	I0731 14:28:51.997166   64774 status.go:257] ha-237000 status: &{Name:ha-237000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 14:28:51.997178   64774 status.go:255] checking status of ha-237000-m02 ...
	I0731 14:28:51.997429   64774 cli_runner.go:164] Run: docker container inspect ha-237000-m02 --format={{.State.Status}}
	I0731 14:28:52.016031   64774 status.go:330] ha-237000-m02 host status = "Stopped" (err=<nil>)
	I0731 14:28:52.016066   64774 status.go:343] host is not running, skipping remaining checks
	I0731 14:28:52.016079   64774 status.go:257] ha-237000-m02 status: &{Name:ha-237000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 14:28:52.016098   64774 status.go:255] checking status of ha-237000-m03 ...
	I0731 14:28:52.016401   64774 cli_runner.go:164] Run: docker container inspect ha-237000-m03 --format={{.State.Status}}
	I0731 14:28:52.034884   64774 status.go:330] ha-237000-m03 host status = "Running" (err=<nil>)
	I0731 14:28:52.034911   64774 host.go:66] Checking if "ha-237000-m03" exists ...
	I0731 14:28:52.035167   64774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-237000-m03
	I0731 14:28:52.053195   64774 host.go:66] Checking if "ha-237000-m03" exists ...
	I0731 14:28:52.053469   64774 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 14:28:52.053525   64774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-237000-m03
	I0731 14:28:52.071888   64774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61058 SSHKeyPath:/Users/jenkins/minikube-integration/19360-61501/.minikube/machines/ha-237000-m03/id_rsa Username:docker}
	I0731 14:28:52.158356   64774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 14:28:52.168862   64774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-237000
	I0731 14:28:52.187868   64774 kubeconfig.go:125] found "ha-237000" server: "https://127.0.0.1:60952"
	I0731 14:28:52.187890   64774 api_server.go:166] Checking apiserver status ...
	I0731 14:28:52.187936   64774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 14:28:52.198760   64774 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2311/cgroup
	W0731 14:28:52.209084   64774 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2311/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 14:28:52.209158   64774 ssh_runner.go:195] Run: ls
	I0731 14:28:52.213117   64774 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60952/healthz ...
	I0731 14:28:52.216945   64774 api_server.go:279] https://127.0.0.1:60952/healthz returned 200:
	ok
	I0731 14:28:52.216961   64774 status.go:422] ha-237000-m03 apiserver status = Running (err=<nil>)
	I0731 14:28:52.216970   64774 status.go:257] ha-237000-m03 status: &{Name:ha-237000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 14:28:52.216986   64774 status.go:255] checking status of ha-237000-m04 ...
	I0731 14:28:52.217239   64774 cli_runner.go:164] Run: docker container inspect ha-237000-m04 --format={{.State.Status}}
	I0731 14:28:52.236316   64774 status.go:330] ha-237000-m04 host status = "Running" (err=<nil>)
	I0731 14:28:52.236342   64774 host.go:66] Checking if "ha-237000-m04" exists ...
	I0731 14:28:52.236589   64774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-237000-m04
	I0731 14:28:52.254893   64774 host.go:66] Checking if "ha-237000-m04" exists ...
	I0731 14:28:52.255158   64774 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 14:28:52.255206   64774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-237000-m04
	I0731 14:28:52.273496   64774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61182 SSHKeyPath:/Users/jenkins/minikube-integration/19360-61501/.minikube/machines/ha-237000-m04/id_rsa Username:docker}
	I0731 14:28:52.360392   64774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 14:28:52.370835   64774 status.go:257] ha-237000-m04 status: &{Name:ha-237000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.35s)
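
The status flow in the log above is straightforward to reproduce by hand: minikube asks Docker for the container state, resolves the host port mapped to the apiserver's 8443/tcp, then probes /healthz. A minimal sketch of the same sequence, where <hostport> is a placeholder for whatever the second command prints:

$ docker container inspect ha-237000 --format={{.State.Status}}
$ docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-237000
$ curl -sk https://127.0.0.1:<hostport>/healthz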

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

TestMultiControlPlane/serial/RestartSecondaryNode (23.61s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 node start m02 -v=7 --alsologtostderr
E0731 14:29:08.588680   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-237000 node start m02 -v=7 --alsologtostderr: (22.174905758s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-darwin-amd64 -p ha-237000 status -v=7 --alsologtostderr: (1.36574024s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (23.61s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (209.76s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-237000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-237000 -v=7 --alsologtostderr
E0731 14:29:32.948602   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 14:29:32.954511   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 14:29:32.965640   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 14:29:32.986729   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 14:29:33.028458   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 14:29:33.109637   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 14:29:33.269818   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 14:29:33.591229   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 14:29:34.231633   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 14:29:35.512111   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 14:29:38.072695   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 14:29:43.193488   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-237000 -v=7 --alsologtostderr: (33.844498816s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-237000 --wait=true -v=7 --alsologtostderr
E0731 14:29:53.434324   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 14:30:13.915868   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 14:30:54.877426   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
E0731 14:32:16.798857   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-237000 --wait=true -v=7 --alsologtostderr: (2m55.766230657s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-237000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (209.76s)
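
The sequence this test exercises condenses to three commands: stop the whole cluster, start it again with --wait=true so the command blocks until the nodes are healthy, then confirm no node disappeared. Same flags and profile name as this run:

$ out/minikube-darwin-amd64 stop -p ha-237000 -v=7 --alsologtostderr
$ out/minikube-darwin-amd64 start -p ha-237000 --wait=true -v=7 --alsologtostderr
$ out/minikube-darwin-amd64 node list -p ha-237000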

TestMultiControlPlane/serial/DeleteSecondaryNode (10.33s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-237000 node delete m03 -v=7 --alsologtostderr: (9.557290603s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.33s)
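
The final assertion uses a go-template that walks every node's conditions and prints only the Ready status, one per line. A standalone, shell-quoted form of the same template (the single-quote wrapping is one plausible way to pass it; the harness's exact quoting may differ):

$ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'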

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.49s)

TestMultiControlPlane/serial/StopCluster (32.56s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-237000 stop -v=7 --alsologtostderr: (32.45188065s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-237000 status -v=7 --alsologtostderr: exit status 7 (112.767411ms)

-- stdout --
	ha-237000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-237000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-237000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0731 14:33:30.457040   65466 out.go:291] Setting OutFile to fd 1 ...
	I0731 14:33:30.457313   65466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:33:30.457319   65466 out.go:304] Setting ErrFile to fd 2...
	I0731 14:33:30.457323   65466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 14:33:30.457500   65466 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19360-61501/.minikube/bin
	I0731 14:33:30.457678   65466 out.go:298] Setting JSON to false
	I0731 14:33:30.457699   65466 mustload.go:65] Loading cluster: ha-237000
	I0731 14:33:30.457751   65466 notify.go:220] Checking for updates...
	I0731 14:33:30.458030   65466 config.go:182] Loaded profile config "ha-237000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 14:33:30.458046   65466 status.go:255] checking status of ha-237000 ...
	I0731 14:33:30.458433   65466 cli_runner.go:164] Run: docker container inspect ha-237000 --format={{.State.Status}}
	I0731 14:33:30.476570   65466 status.go:330] ha-237000 host status = "Stopped" (err=<nil>)
	I0731 14:33:30.476612   65466 status.go:343] host is not running, skipping remaining checks
	I0731 14:33:30.476621   65466 status.go:257] ha-237000 status: &{Name:ha-237000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 14:33:30.476651   65466 status.go:255] checking status of ha-237000-m02 ...
	I0731 14:33:30.476928   65466 cli_runner.go:164] Run: docker container inspect ha-237000-m02 --format={{.State.Status}}
	I0731 14:33:30.494296   65466 status.go:330] ha-237000-m02 host status = "Stopped" (err=<nil>)
	I0731 14:33:30.494318   65466 status.go:343] host is not running, skipping remaining checks
	I0731 14:33:30.494325   65466 status.go:257] ha-237000-m02 status: &{Name:ha-237000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 14:33:30.494341   65466 status.go:255] checking status of ha-237000-m04 ...
	I0731 14:33:30.494594   65466 cli_runner.go:164] Run: docker container inspect ha-237000-m04 --format={{.State.Status}}
	I0731 14:33:30.512584   65466 status.go:330] ha-237000-m04 host status = "Stopped" (err=<nil>)
	I0731 14:33:30.512607   65466 status.go:343] host is not running, skipping remaining checks
	I0731 14:33:30.512616   65466 status.go:257] ha-237000-m04 status: &{Name:ha-237000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.56s)
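
Note the exit status 7 above: minikube status reports a stopped host through its exit code rather than failing outright, so scripts that poll it should tolerate nonzero exits. A minimal sketch (the echo is illustrative; inside the || branch, $? still holds the status command's exit code):

$ out/minikube-darwin-amd64 -p ha-237000 status || echo "cluster not running, exit code $?"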

TestMultiControlPlane/serial/RestartCluster (86.35s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-237000 --wait=true -v=7 --alsologtostderr --driver=docker 
E0731 14:33:40.803458   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
E0731 14:34:32.949985   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-237000 --wait=true -v=7 --alsologtostderr --driver=docker : (1m25.579720559s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (86.35s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.48s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.48s)

TestMultiControlPlane/serial/AddSecondaryNode (35.56s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-237000 --control-plane -v=7 --alsologtostderr
E0731 14:35:00.662855   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-237000 --control-plane -v=7 --alsologtostderr: (34.631664864s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-237000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (35.56s)
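
Since m03 was deleted earlier in the sequence, the cluster is back to two control planes at this point; node add with --control-plane restores the third. The two commands the test runs, reusable as-is against this profile:

$ out/minikube-darwin-amd64 node add -p ha-237000 --control-plane -v=7 --alsologtostderr
$ out/minikube-darwin-amd64 -p ha-237000 status -v=7 --alsologtostderr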

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.67s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.67s)

TestImageBuild/serial/Setup (20.82s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-395000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-395000 --driver=docker : (20.822402224s)
--- PASS: TestImageBuild/serial/Setup (20.82s)

TestImageBuild/serial/NormalBuild (1.76s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-395000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-395000: (1.761888941s)
--- PASS: TestImageBuild/serial/NormalBuild (1.76s)

TestImageBuild/serial/BuildWithBuildArg (0.83s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-395000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.83s)
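
Each --build-opt here is forwarded to the underlying Docker build: build-arg=ENV_A=test_env_str supplies a value for an ARG the test Dockerfile declares, and no-cache forces a clean build. A minimal sketch against a hypothetical local context ./app whose Dockerfile contains "ARG ENV_A":

$ out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./app -p image-395000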

TestImageBuild/serial/BuildWithDockerIgnore (0.64s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-395000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.64s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.68s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-395000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.68s)
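
The -f flag selects a Dockerfile that is not at the root of the build context; judging from the testdata layout, inner/Dockerfile is resolved inside the ./testdata/image-build/test-f directory passed as the context. The same shape against a hypothetical ./app context:

$ out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./app -p image-395000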

TestJSONOutput/start/Command (75.24s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-839000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-839000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (1m15.243577049s)
--- PASS: TestJSONOutput/start/Command (75.24s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.49s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-839000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.49s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.48s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-839000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.48s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.71s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-839000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-839000 --output=json --user=testUser: (5.709933369s)
--- PASS: TestJSONOutput/stop/Command (5.71s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.58s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-723000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-723000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (360.534591ms)

-- stdout --
	{"specversion":"1.0","id":"12bfd91e-42af-4759-a8e6-99bef01e2d44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-723000] minikube v1.33.1 on Darwin 14.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"61671b1b-d062-4ade-b4dc-4271ea1fe64b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19360"}}
	{"specversion":"1.0","id":"9ac9d8f9-b563-45dc-a65a-0fa824a9f1d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig"}}
	{"specversion":"1.0","id":"daa62ee4-c347-471e-b73c-fc6445b5f3b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"8ce922ac-4f52-42af-810a-57c24df2ac76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8d285416-b294-452c-8fe8-1bcc39c0c06c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19360-61501/.minikube"}}
	{"specversion":"1.0","id":"cf483c2b-abf1-4a89-b857-5dcbfa60f0ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f4f1ee10-67af-4769-88f2-f767fc4eee9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-723000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-723000
--- PASS: TestErrorJSONOutput (0.58s)
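
Every line of the stdout above is a self-contained CloudEvents-style JSON object, so failures can be picked out by their type field. A minimal sketch, assuming jq is installed and using a hypothetical profile name:

$ out/minikube-darwin-amd64 start -p demo --output=json --driver=fail | jq -r 'select(.type=="io.k8s.sigs.minikube.error") | .data.message'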

TestKicCustomNetwork/create_custom_network (23.12s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-362000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-362000 --network=: (21.310314904s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-362000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-362000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-362000: (1.785903486s)
--- PASS: TestKicCustomNetwork/create_custom_network (23.12s)

TestKicCustomNetwork/use_default_bridge_network (22.61s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-156000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-156000 --network=bridge: (20.844914365s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-156000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-156000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-156000: (1.746780788s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.61s)

TestKicExistingNetwork (22.31s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-813000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-813000 --network=existing-network: (20.437566875s)
helpers_test.go:175: Cleaning up "existing-network-813000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-813000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-813000: (1.702918006s)
--- PASS: TestKicExistingNetwork (22.31s)
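
Unlike the --network= runs above, this test attaches minikube to a Docker network that already exists (presumably created by the test beforehand, which is why it lists networks first). The same flow by hand:

$ docker network create existing-network
$ out/minikube-darwin-amd64 start -p existing-network-813000 --network=existing-network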

TestKicCustomSubnet (22.41s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-274000 --subnet=192.168.60.0/24
E0731 14:38:40.838991   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-274000 --subnet=192.168.60.0/24: (20.597211417s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-274000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-274000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-274000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-274000: (1.794473386s)
--- PASS: TestKicCustomSubnet (22.41s)

TestKicStaticIP (22.66s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-119000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-119000 --static-ip=192.168.200.200: (20.678279469s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-119000 ip
helpers_test.go:175: Cleaning up "static-ip-119000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-119000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-119000: (1.801847678s)
--- PASS: TestKicStaticIP (22.66s)

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (48.16s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-007000 --driver=docker 
E0731 14:39:32.985232   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/functional-819000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-007000 --driver=docker : (21.495913332s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-009000 --driver=docker 
E0731 14:40:03.988717   62037 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19360-61501/.minikube/profiles/addons-891000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-009000 --driver=docker : (21.527525383s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-007000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-009000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-009000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-009000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-009000: (1.991354728s)
helpers_test.go:175: Cleaning up "first-007000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-007000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-007000: (1.965828324s)
--- PASS: TestMinikubeProfile (48.16s)

TestMountStart/serial/StartWithMountFirst (7.04s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-455000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-455000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.043171015s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.04s)

TestPreload (134.1s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-504000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-504000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m40.711891103s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-504000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-504000 image pull gcr.io/k8s-minikube/busybox: (1.491141089s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-504000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-504000: (10.678976421s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-504000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-504000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (18.87913515s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-504000 image list
helpers_test.go:175: Cleaning up "test-preload-504000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-504000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-504000: (2.035159859s)
--- PASS: TestPreload (134.10s)
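
The point of this sequence: with --preload=false the cluster starts without the preloaded image tarball, an image is then pulled into it, and after a stop/start cycle image list must still show the pulled busybox, i.e. locally pulled images survive a restart. Condensed from the commands above (verbosity flags dropped):

$ out/minikube-darwin-amd64 start -p test-preload-504000 --memory=2200 --preload=false --driver=docker --kubernetes-version=v1.24.4
$ out/minikube-darwin-amd64 -p test-preload-504000 image pull gcr.io/k8s-minikube/busybox
$ out/minikube-darwin-amd64 stop -p test-preload-504000
$ out/minikube-darwin-amd64 start -p test-preload-504000 --memory=2200 --wait=true --driver=docker
$ out/minikube-darwin-amd64 -p test-preload-504000 image list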

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (10.23s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19360
- KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2175238968/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2175238968/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2175238968/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2175238968/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (10.23s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.43s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19360
- KUBECONFIG=/Users/jenkins/minikube-integration/19360-61501/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2837004089/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2837004089/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2837004089/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2837004089/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.43s)

Test skip (19/210)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestAddons/parallel/Registry (13.18s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.446109ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-drmcz" [03307f00-eccc-4449-9171-a0369019cb0d] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005409s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vcf28" [7994a10c-5c02-4605-9d12-3fc87307c101] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005164012s
addons_test.go:342: (dbg) Run:  kubectl --context addons-891000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-891000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-891000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.111034895s)
addons_test.go:357: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (13.18s)

TestAddons/parallel/Ingress (10.77s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-891000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-891000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-891000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [44e6a524-0db4-46a0-ae28-b35ba8e46efb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [44e6a524-0db4-46a0-ae28-b35ba8e46efb] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.005358531s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-891000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (10.77s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (10.19s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-819000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-819000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-qjrxx" [a319b9c3-3b1c-4fbe-95d0-573f01fccd7a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-qjrxx" [a319b9c3-3b1c-4fbe-95d0-573f01fccd7a] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.002955614s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (10.19s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
