Test Report: Docker_macOS 19338

0eb0b855c9cd12df3081fe3f67aa770440dcda12:2024-07-29:35550

Failed tests (22/212)

TestOffline (753.1s)
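For a local re-run, the failing invocation can be replayed directly against the built binary; a minimal sketch using the binary path, profile name, and flags recorded in the log below (assumes out/minikube-darwin-amd64 has already been built):

    out/minikube-darwin-amd64 start -p offline-docker-206000 \
      --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker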

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-206000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-206000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m32.553750541s)

-- stdout --
	* [offline-docker-206000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "offline-docker-206000" primary control-plane node in "offline-docker-206000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-206000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
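To reproduce through the integration harness rather than the raw binary, standard Go tooling can target just this test; a hedged sketch (aab_offline_test.go lives under test/integration in the minikube repo; the timeout value is an assumption, and harness flags such as the binary path are omitted here and would need to be supplied):

    go test ./test/integration -run 'TestOffline' -v -timeout 30m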
** stderr ** 
	I0729 11:55:24.899352   23392 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:55:24.899936   23392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:55:24.899948   23392 out.go:304] Setting ErrFile to fd 2...
	I0729 11:55:24.899955   23392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:55:24.900590   23392 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 11:55:24.902307   23392 out.go:298] Setting JSON to false
	I0729 11:55:24.926101   23392 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":10494,"bootTime":1722268830,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 11:55:24.926197   23392 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 11:55:24.947602   23392 out.go:177] * [offline-docker-206000] minikube v1.33.1 on Darwin 14.5
	I0729 11:55:24.989125   23392 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 11:55:24.989147   23392 notify.go:220] Checking for updates...
	I0729 11:55:25.031209   23392 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	I0729 11:55:25.052046   23392 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 11:55:25.073217   23392 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:55:25.094246   23392 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	I0729 11:55:25.115083   23392 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:55:25.137371   23392 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:55:25.160937   23392 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 11:55:25.161126   23392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 11:55:25.243817   23392 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:false NGoroutines:170 SystemTime:2024-07-29 18:55:25.234598593 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 11:55:25.286283   23392 out.go:177] * Using the docker driver based on user configuration
	I0729 11:55:25.307574   23392 start.go:297] selected driver: docker
	I0729 11:55:25.307602   23392 start.go:901] validating driver "docker" against <nil>
	I0729 11:55:25.307617   23392 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:55:25.312214   23392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 11:55:25.407525   23392 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:false NGoroutines:170 SystemTime:2024-07-29 18:55:25.39815733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 11:55:25.407757   23392 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 11:55:25.407985   23392 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:55:25.429384   23392 out.go:177] * Using Docker Desktop driver with root privileges
	I0729 11:55:25.450308   23392 cni.go:84] Creating CNI manager for ""
	I0729 11:55:25.450329   23392 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 11:55:25.450336   23392 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 11:55:25.450396   23392 start.go:340] cluster config:
	{Name:offline-docker-206000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-206000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:55:25.471291   23392 out.go:177] * Starting "offline-docker-206000" primary control-plane node in "offline-docker-206000" cluster
	I0729 11:55:25.513554   23392 cache.go:121] Beginning downloading kic base image for docker with docker
	I0729 11:55:25.555349   23392 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0729 11:55:25.597382   23392 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 11:55:25.597438   23392 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 11:55:25.597466   23392 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0729 11:55:25.597499   23392 cache.go:56] Caching tarball of preloaded images
	I0729 11:55:25.597731   23392 preload.go:172] Found /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0729 11:55:25.597756   23392 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 11:55:25.599156   23392 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/offline-docker-206000/config.json ...
	I0729 11:55:25.599269   23392 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/offline-docker-206000/config.json: {Name:mk94c424a964b2961e5ced1b5ebb2a338ec3f42f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0729 11:55:25.791307   23392 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0729 11:55:25.791320   23392 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 11:55:25.791442   23392 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 11:55:25.791459   23392 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 11:55:25.791465   23392 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 11:55:25.791473   23392 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 11:55:25.791478   23392 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0729 11:55:26.030523   23392 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0729 11:55:26.030569   23392 cache.go:194] Successfully downloaded all kic artifacts
	I0729 11:55:26.030619   23392 start.go:360] acquireMachinesLock for offline-docker-206000: {Name:mk795797ef00ef52f45866f0672804bef694bc1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:55:26.030798   23392 start.go:364] duration metric: took 166.738µs to acquireMachinesLock for "offline-docker-206000"
	I0729 11:55:26.030827   23392 start.go:93] Provisioning new machine with config: &{Name:offline-docker-206000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-206000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 11:55:26.030894   23392 start.go:125] createHost starting for "" (driver="docker")
	I0729 11:55:26.072734   23392 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0729 11:55:26.072934   23392 start.go:159] libmachine.API.Create for "offline-docker-206000" (driver="docker")
	I0729 11:55:26.072960   23392 client.go:168] LocalClient.Create starting
	I0729 11:55:26.073072   23392 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/ca.pem
	I0729 11:55:26.073123   23392 main.go:141] libmachine: Decoding PEM data...
	I0729 11:55:26.073141   23392 main.go:141] libmachine: Parsing certificate...
	I0729 11:55:26.073208   23392 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/cert.pem
	I0729 11:55:26.073256   23392 main.go:141] libmachine: Decoding PEM data...
	I0729 11:55:26.073267   23392 main.go:141] libmachine: Parsing certificate...
	I0729 11:55:26.073835   23392 cli_runner.go:164] Run: docker network inspect offline-docker-206000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 11:55:26.141010   23392 cli_runner.go:211] docker network inspect offline-docker-206000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 11:55:26.141113   23392 network_create.go:284] running [docker network inspect offline-docker-206000] to gather additional debugging logs...
	I0729 11:55:26.141126   23392 cli_runner.go:164] Run: docker network inspect offline-docker-206000
	W0729 11:55:26.165543   23392 cli_runner.go:211] docker network inspect offline-docker-206000 returned with exit code 1
	I0729 11:55:26.165575   23392 network_create.go:287] error running [docker network inspect offline-docker-206000]: docker network inspect offline-docker-206000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-206000 not found
	I0729 11:55:26.165591   23392 network_create.go:289] output of [docker network inspect offline-docker-206000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-206000 not found
	
	** /stderr **
	I0729 11:55:26.165728   23392 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 11:55:26.185250   23392 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 11:55:26.186738   23392 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 11:55:26.187113   23392 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001682dd0}
	I0729 11:55:26.187128   23392 network_create.go:124] attempt to create docker network offline-docker-206000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0729 11:55:26.187202   23392 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-206000 offline-docker-206000
	I0729 11:55:26.252409   23392 network_create.go:108] docker network offline-docker-206000 192.168.67.0/24 created
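At this point the bridge network has been created successfully; a hedged way to confirm the subnet and gateway match what minikube calculated above (standard docker CLI, Go template over the network's IPAM config):

    docker network inspect offline-docker-206000 \
      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'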
	I0729 11:55:26.252448   23392 kic.go:121] calculated static IP "192.168.67.2" for the "offline-docker-206000" container
	I0729 11:55:26.252570   23392 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 11:55:26.272940   23392 cli_runner.go:164] Run: docker volume create offline-docker-206000 --label name.minikube.sigs.k8s.io=offline-docker-206000 --label created_by.minikube.sigs.k8s.io=true
	I0729 11:55:26.292825   23392 oci.go:103] Successfully created a docker volume offline-docker-206000
	I0729 11:55:26.292971   23392 cli_runner.go:164] Run: docker run --rm --name offline-docker-206000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-206000 --entrypoint /usr/bin/test -v offline-docker-206000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 11:55:26.861327   23392 oci.go:107] Successfully prepared a docker volume offline-docker-206000
	I0729 11:55:26.861399   23392 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 11:55:26.861418   23392 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 11:55:26.861540   23392 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-206000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
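Note the timestamps: the extraction command above starts at 11:55:26 and the next log line arrives at 12:01:26, matching the 360-second createHost timeout reported further down, which suggests the preload extraction never completed. A hedged sanity check of the tarball it reads (path from the log; requires the lz4 CLI to be installed):

    lz4 -t /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4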
	I0729 12:01:26.077445   23392 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:01:26.077605   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:01:26.097243   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	I0729 12:01:26.097362   23392 retry.go:31] will retry after 221.181797ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:26.318838   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:01:26.344084   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	I0729 12:01:26.344203   23392 retry.go:31] will retry after 418.674525ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:26.763744   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:01:26.783331   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	I0729 12:01:26.783439   23392 retry.go:31] will retry after 689.696188ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:27.473346   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:01:27.491521   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	W0729 12:01:27.491626   23392 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	
	W0729 12:01:27.491649   23392 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:27.491704   23392 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 12:01:27.491768   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:01:27.509688   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	I0729 12:01:27.509787   23392 retry.go:31] will retry after 191.304941ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:27.701407   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:01:27.719352   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	I0729 12:01:27.719453   23392 retry.go:31] will retry after 290.607396ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:28.010402   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:01:28.029545   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	I0729 12:01:28.029640   23392 retry.go:31] will retry after 479.753983ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:28.509917   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:01:28.529510   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	I0729 12:01:28.529601   23392 retry.go:31] will retry after 697.337988ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:29.227195   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:01:29.244831   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	W0729 12:01:29.244931   23392 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	
	W0729 12:01:29.244955   23392 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
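The repeated inspect/retry lines above are minikube's retry helper polling for the container's published SSH port; a hedged shell equivalent of the same probe (identical docker template from the log, illustrative delays, not minikube's actual code):

    for delay in 0.2 0.4 0.7; do
      docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' offline-docker-206000 && break
      sleep "$delay"
    done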
	I0729 12:01:29.244976   23392 start.go:128] duration metric: took 6m3.214352272s to createHost
	I0729 12:01:29.244985   23392 start.go:83] releasing machines lock for "offline-docker-206000", held for 6m3.214463488s
	W0729 12:01:29.245000   23392 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0729 12:01:29.245509   23392 cli_runner.go:164] Run: docker container inspect offline-docker-206000 --format={{.State.Status}}
	W0729 12:01:29.263165   23392 cli_runner.go:211] docker container inspect offline-docker-206000 --format={{.State.Status}} returned with exit code 1
	I0729 12:01:29.263218   23392 delete.go:82] Unable to get host status for offline-docker-206000, assuming it has already been deleted: state: unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	W0729 12:01:29.263297   23392 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0729 12:01:29.263309   23392 start.go:729] Will try again in 5 seconds ...
	I0729 12:01:34.264002   23392 start.go:360] acquireMachinesLock for offline-docker-206000: {Name:mk795797ef00ef52f45866f0672804bef694bc1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:01:34.264199   23392 start.go:364] duration metric: took 150.187µs to acquireMachinesLock for "offline-docker-206000"
	I0729 12:01:34.264233   23392 start.go:96] Skipping create...Using existing machine configuration
	I0729 12:01:34.264248   23392 fix.go:54] fixHost starting: 
	I0729 12:01:34.264691   23392 cli_runner.go:164] Run: docker container inspect offline-docker-206000 --format={{.State.Status}}
	W0729 12:01:34.284718   23392 cli_runner.go:211] docker container inspect offline-docker-206000 --format={{.State.Status}} returned with exit code 1
	I0729 12:01:34.284763   23392 fix.go:112] recreateIfNeeded on offline-docker-206000: state= err=unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:34.284786   23392 fix.go:117] machineExists: false. err=machine does not exist
	I0729 12:01:34.347253   23392 out.go:177] * docker "offline-docker-206000" container is missing, will recreate.
	I0729 12:01:34.369508   23392 delete.go:124] DEMOLISHING offline-docker-206000 ...
	I0729 12:01:34.369600   23392 cli_runner.go:164] Run: docker container inspect offline-docker-206000 --format={{.State.Status}}
	W0729 12:01:34.387504   23392 cli_runner.go:211] docker container inspect offline-docker-206000 --format={{.State.Status}} returned with exit code 1
	W0729 12:01:34.387555   23392 stop.go:83] unable to get state: unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:34.387570   23392 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:34.387946   23392 cli_runner.go:164] Run: docker container inspect offline-docker-206000 --format={{.State.Status}}
	W0729 12:01:34.405609   23392 cli_runner.go:211] docker container inspect offline-docker-206000 --format={{.State.Status}} returned with exit code 1
	I0729 12:01:34.405662   23392 delete.go:82] Unable to get host status for offline-docker-206000, assuming it has already been deleted: state: unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:34.405768   23392 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-206000
	W0729 12:01:34.423116   23392 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-206000 returned with exit code 1
	I0729 12:01:34.423161   23392 kic.go:371] could not find the container offline-docker-206000 to remove it. will try anyways
	I0729 12:01:34.423227   23392 cli_runner.go:164] Run: docker container inspect offline-docker-206000 --format={{.State.Status}}
	W0729 12:01:34.440087   23392 cli_runner.go:211] docker container inspect offline-docker-206000 --format={{.State.Status}} returned with exit code 1
	W0729 12:01:34.440130   23392 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:34.440212   23392 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-206000 /bin/bash -c "sudo init 0"
	W0729 12:01:34.458501   23392 cli_runner.go:211] docker exec --privileged -t offline-docker-206000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0729 12:01:34.458532   23392 oci.go:650] error shutdown offline-docker-206000: docker exec --privileged -t offline-docker-206000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:35.458911   23392 cli_runner.go:164] Run: docker container inspect offline-docker-206000 --format={{.State.Status}}
	W0729 12:01:35.484383   23392 cli_runner.go:211] docker container inspect offline-docker-206000 --format={{.State.Status}} returned with exit code 1
	I0729 12:01:35.484433   23392 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:35.484443   23392 oci.go:664] temporary error: container offline-docker-206000 status is  but expect it to be exited
	I0729 12:01:35.484481   23392 retry.go:31] will retry after 465.728038ms: couldn't verify container is exited. %v: unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:35.952015   23392 cli_runner.go:164] Run: docker container inspect offline-docker-206000 --format={{.State.Status}}
	W0729 12:01:35.970950   23392 cli_runner.go:211] docker container inspect offline-docker-206000 --format={{.State.Status}} returned with exit code 1
	I0729 12:01:35.970999   23392 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:35.971014   23392 oci.go:664] temporary error: container offline-docker-206000 status is  but expect it to be exited
	I0729 12:01:35.971040   23392 retry.go:31] will retry after 550.987003ms: couldn't verify container is exited. %v: unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:36.523085   23392 cli_runner.go:164] Run: docker container inspect offline-docker-206000 --format={{.State.Status}}
	W0729 12:01:36.541974   23392 cli_runner.go:211] docker container inspect offline-docker-206000 --format={{.State.Status}} returned with exit code 1
	I0729 12:01:36.542026   23392 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:36.542038   23392 oci.go:664] temporary error: container offline-docker-206000 status is  but expect it to be exited
	I0729 12:01:36.542063   23392 retry.go:31] will retry after 640.419387ms: couldn't verify container is exited. %v: unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:37.184884   23392 cli_runner.go:164] Run: docker container inspect offline-docker-206000 --format={{.State.Status}}
	W0729 12:01:37.203832   23392 cli_runner.go:211] docker container inspect offline-docker-206000 --format={{.State.Status}} returned with exit code 1
	I0729 12:01:37.203880   23392 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:37.203890   23392 oci.go:664] temporary error: container offline-docker-206000 status is  but expect it to be exited
	I0729 12:01:37.203920   23392 retry.go:31] will retry after 1.232576778s: couldn't verify container is exited. %v: unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:38.436974   23392 cli_runner.go:164] Run: docker container inspect offline-docker-206000 --format={{.State.Status}}
	W0729 12:01:38.458637   23392 cli_runner.go:211] docker container inspect offline-docker-206000 --format={{.State.Status}} returned with exit code 1
	I0729 12:01:38.458683   23392 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:38.458692   23392 oci.go:664] temporary error: container offline-docker-206000 status is  but expect it to be exited
	I0729 12:01:38.458719   23392 retry.go:31] will retry after 3.566693932s: couldn't verify container is exited. %v: unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:42.025800   23392 cli_runner.go:164] Run: docker container inspect offline-docker-206000 --format={{.State.Status}}
	W0729 12:01:42.045648   23392 cli_runner.go:211] docker container inspect offline-docker-206000 --format={{.State.Status}} returned with exit code 1
	I0729 12:01:42.045695   23392 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:42.045704   23392 oci.go:664] temporary error: container offline-docker-206000 status is  but expect it to be exited
	I0729 12:01:42.045729   23392 retry.go:31] will retry after 2.614420974s: couldn't verify container is exited. %v: unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:44.661497   23392 cli_runner.go:164] Run: docker container inspect offline-docker-206000 --format={{.State.Status}}
	W0729 12:01:44.682007   23392 cli_runner.go:211] docker container inspect offline-docker-206000 --format={{.State.Status}} returned with exit code 1
	I0729 12:01:44.682064   23392 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:44.682073   23392 oci.go:664] temporary error: container offline-docker-206000 status is  but expect it to be exited
	I0729 12:01:44.682098   23392 retry.go:31] will retry after 5.392094976s: couldn't verify container is exited. %v: unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:50.074398   23392 cli_runner.go:164] Run: docker container inspect offline-docker-206000 --format={{.State.Status}}
	W0729 12:01:50.093830   23392 cli_runner.go:211] docker container inspect offline-docker-206000 --format={{.State.Status}} returned with exit code 1
	I0729 12:01:50.093879   23392 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:01:50.093894   23392 oci.go:664] temporary error: container offline-docker-206000 status is  but expect it to be exited
	I0729 12:01:50.093928   23392 oci.go:88] couldn't shut down offline-docker-206000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	 
	I0729 12:01:50.093990   23392 cli_runner.go:164] Run: docker rm -f -v offline-docker-206000
	I0729 12:01:50.112044   23392 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-206000
	W0729 12:01:50.129114   23392 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-206000 returned with exit code 1
	I0729 12:01:50.129213   23392 cli_runner.go:164] Run: docker network inspect offline-docker-206000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 12:01:50.147123   23392 cli_runner.go:164] Run: docker network rm offline-docker-206000
	I0729 12:01:50.229033   23392 fix.go:124] Sleeping 1 second for extra luck!
	I0729 12:01:51.229838   23392 start.go:125] createHost starting for "" (driver="docker")
	I0729 12:01:51.258557   23392 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0729 12:01:51.258735   23392 start.go:159] libmachine.API.Create for "offline-docker-206000" (driver="docker")
	I0729 12:01:51.258764   23392 client.go:168] LocalClient.Create starting
	I0729 12:01:51.258979   23392 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/ca.pem
	I0729 12:01:51.259080   23392 main.go:141] libmachine: Decoding PEM data...
	I0729 12:01:51.259107   23392 main.go:141] libmachine: Parsing certificate...
	I0729 12:01:51.259194   23392 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/cert.pem
	I0729 12:01:51.259290   23392 main.go:141] libmachine: Decoding PEM data...
	I0729 12:01:51.259306   23392 main.go:141] libmachine: Parsing certificate...
	I0729 12:01:51.260029   23392 cli_runner.go:164] Run: docker network inspect offline-docker-206000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 12:01:51.279069   23392 cli_runner.go:211] docker network inspect offline-docker-206000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 12:01:51.279174   23392 network_create.go:284] running [docker network inspect offline-docker-206000] to gather additional debugging logs...
	I0729 12:01:51.279189   23392 cli_runner.go:164] Run: docker network inspect offline-docker-206000
	W0729 12:01:51.297230   23392 cli_runner.go:211] docker network inspect offline-docker-206000 returned with exit code 1
	I0729 12:01:51.297269   23392 network_create.go:287] error running [docker network inspect offline-docker-206000]: docker network inspect offline-docker-206000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-206000 not found
	I0729 12:01:51.297287   23392 network_create.go:289] output of [docker network inspect offline-docker-206000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-206000 not found
	
	** /stderr **
	I0729 12:01:51.297420   23392 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 12:01:51.317552   23392 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:01:51.319128   23392 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:01:51.320724   23392 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:01:51.322271   23392 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:01:51.323642   23392 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:01:51.324160   23392 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001717b30}
	I0729 12:01:51.324172   23392 network_create.go:124] attempt to create docker network offline-docker-206000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0729 12:01:51.324252   23392 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-206000 offline-docker-206000
	I0729 12:01:51.389977   23392 network_create.go:108] docker network offline-docker-206000 192.168.94.0/24 created
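The subnet scan above is deterministic: minikube walks candidate private /24 networks whose third octet grows by 9 (49, 58, 67, 76, 85, 94), skips any that an existing Docker network has reserved, and creates the first free one. A minimal sketch of that selection in Go, assuming only the step-of-9 pattern visible in the log (the real network.go also probes host interfaces and handles the CIDR arithmetic generically):

	package demo

	import "fmt"

	// firstFreeSubnet steps the third octet by 9, as the log above shows,
	// and returns the first candidate that no existing network reserves.
	func firstFreeSubnet(reserved map[string]bool) (string, error) {
		for octet := 49; octet <= 254; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if !reserved[cidr] {
				return cidr, nil
			}
		}
		return "", fmt.Errorf("no free private /24 found")
	}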
	I0729 12:01:51.390015   23392 kic.go:121] calculated static IP "192.168.94.2" for the "offline-docker-206000" container
	I0729 12:01:51.390126   23392 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 12:01:51.409502   23392 cli_runner.go:164] Run: docker volume create offline-docker-206000 --label name.minikube.sigs.k8s.io=offline-docker-206000 --label created_by.minikube.sigs.k8s.io=true
	I0729 12:01:51.426889   23392 oci.go:103] Successfully created a docker volume offline-docker-206000
	I0729 12:01:51.427000   23392 cli_runner.go:164] Run: docker run --rm --name offline-docker-206000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-206000 --entrypoint /usr/bin/test -v offline-docker-206000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 12:01:51.700211   23392 oci.go:107] Successfully prepared a docker volume offline-docker-206000
	I0729 12:01:51.700269   23392 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 12:01:51.700287   23392 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 12:01:51.700405   23392 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-206000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0729 12:07:51.328092   23392 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:07:51.328326   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:07:51.347786   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	I0729 12:07:51.347897   23392 retry.go:31] will retry after 195.360963ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
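Note the six-minute gap between 12:01:51 and 12:07:51: the preload extraction started above never finished before the create-host timer expired, and since no container was ever started, every port probe that follows fails with "No such container". The probe is the docker inspect template quoted verbatim in the log; a hedged Go equivalent:

	package demo

	import (
		"os/exec"
		"strings"
	)

	// sshPort asks Docker for the host port mapped to the container's 22/tcp,
	// using the same -f template the retries above keep re-running.
	func sshPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		return strings.TrimSpace(string(out)), err
	}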
	I0729 12:07:51.545712   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:07:51.566522   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	I0729 12:07:51.566638   23392 retry.go:31] will retry after 258.075073ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:07:51.827162   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:07:51.847247   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	I0729 12:07:51.847356   23392 retry.go:31] will retry after 541.287042ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:07:52.391175   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:07:52.411561   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	W0729 12:07:52.411666   23392 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	
	W0729 12:07:52.411684   23392 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:07:52.411770   23392 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 12:07:52.411829   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:07:52.429481   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	I0729 12:07:52.429588   23392 retry.go:31] will retry after 202.549053ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:07:52.633262   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:07:52.652393   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	I0729 12:07:52.652510   23392 retry.go:31] will retry after 321.558984ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:07:52.974512   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:07:52.994461   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	I0729 12:07:52.994566   23392 retry.go:31] will retry after 474.219452ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:07:53.471264   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:07:53.491544   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	I0729 12:07:53.491642   23392 retry.go:31] will retry after 901.703009ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:07:54.394797   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:07:54.415827   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	W0729 12:07:54.415936   23392 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	
	W0729 12:07:54.415962   23392 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
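The retry.go delays above (roughly 195 ms, 258 ms, 541 ms, 902 ms) grow geometrically with randomized jitter, which is why consecutive runs are unevenly spaced. A minimal sketch of such a policy, assuming exponential backoff with jitter; minikube's retry package may use different constants:

	package demo

	import (
		"math/rand"
		"time"
	)

	// backoffDelay roughly doubles the wait per attempt and adds jitter so
	// concurrent retries do not align, matching the spacing seen above.
	func backoffDelay(attempt int) time.Duration {
		base := 200 * time.Millisecond
		d := base << attempt // 200ms, 400ms, 800ms, ...
		jitter := time.Duration(rand.Int63n(int64(d / 2)))
		return d + jitter
	}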
	I0729 12:07:54.415974   23392 start.go:128] duration metric: took 6m3.117098274s to createHost
	I0729 12:07:54.416044   23392 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:07:54.416106   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:07:54.433389   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	I0729 12:07:54.433487   23392 retry.go:31] will retry after 125.038289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:07:54.560959   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:07:54.579665   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	I0729 12:07:54.579760   23392 retry.go:31] will retry after 485.014782ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:07:55.065459   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:07:55.085328   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	I0729 12:07:55.085424   23392 retry.go:31] will retry after 415.168837ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:07:55.502949   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:07:55.522416   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	I0729 12:07:55.522525   23392 retry.go:31] will retry after 854.661869ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:07:56.379225   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:07:56.398532   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	W0729 12:07:56.398645   23392 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	
	W0729 12:07:56.398668   23392 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:07:56.398733   23392 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 12:07:56.398795   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:07:56.416500   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	I0729 12:07:56.416593   23392 retry.go:31] will retry after 204.593225ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:07:56.623588   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:07:56.642898   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	I0729 12:07:56.643008   23392 retry.go:31] will retry after 320.690813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:07:56.966110   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:07:56.986197   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	I0729 12:07:56.986300   23392 retry.go:31] will retry after 295.92366ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:07:57.282724   23392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000
	W0729 12:07:57.302506   23392 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000 returned with exit code 1
	W0729 12:07:57.302625   23392 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	
	W0729 12:07:57.302644   23392 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-206000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-206000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000
	I0729 12:07:57.302655   23392 fix.go:56] duration metric: took 6m22.969436845s for fixHost
	I0729 12:07:57.302661   23392 start.go:83] releasing machines lock for "offline-docker-206000", held for 6m22.96947844s
	W0729 12:07:57.302736   23392 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-206000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-206000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0729 12:07:57.346706   23392 out.go:177] 
	W0729 12:07:57.368721   23392 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0729 12:07:57.368792   23392 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0729 12:07:57.368835   23392 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0729 12:07:57.412767   23392 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-206000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
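Exit status 52 is minikube's DRV_CREATE_TIMEOUT path: host creation ran against a fixed 360-second budget ("create host timed out in 360.000000 seconds"), was retried once via the "container is missing, will recreate" recovery, and timed out again. A hypothetical sketch of that guard's shape; the actual implementation in start.go differs in detail:

	package demo

	import (
		"context"
		"errors"
		"time"
	)

	// createHostWithDeadline runs create under a 360 s budget and surfaces a
	// timeout distinctly so the caller can map it to DRV_CREATE_TIMEOUT.
	func createHostWithDeadline(create func(context.Context) error) error {
		ctx, cancel := context.WithTimeout(context.Background(), 360*time.Second)
		defer cancel()
		if err := create(ctx); err != nil {
			if errors.Is(err, context.DeadlineExceeded) {
				return errors.New("create host timed out in 360 seconds")
			}
			return err
		}
		return nil
	}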
panic.go:626: *** TestOffline FAILED at 2024-07-29 12:07:57.476035 -0700 PDT m=+6142.596096145
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-206000
helpers_test.go:235: (dbg) docker inspect offline-docker-206000:

-- stdout --
	[
	    {
	        "Name": "offline-docker-206000",
	        "Id": "83879f08235a13371483be7c4421af3f7f04289abb3cdbc5551c60b2fb25b951",
	        "Created": "2024-07-29T19:01:51.340076059Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-206000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-206000 -n offline-docker-206000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-206000 -n offline-docker-206000: exit status 7 (75.730068ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 12:07:57.571098   24113 status.go:249] status error: host: state: unknown state "offline-docker-206000": docker container inspect offline-docker-206000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-206000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-206000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-206000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-206000
--- FAIL: TestOffline (753.10s)

TestCertOptions (7201.63s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-713000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E0729 12:24:56.043877   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 12:25:31.088600   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestCertExpiration (5m7s)
	TestCertOptions (4m16s)
	TestNetworkPlugins (30m10s)
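The panic below comes from Go's own test-binary watchdog rather than from minikube: the integration suite runs with a 2 h -timeout, and testing.(*M).startAlarm fires once that budget is spent while the tests listed above are still running or blocked. A minimal reproduction of the mechanism, assuming a deliberately tiny budget:

	package demo

	import (
		"testing"
		"time"
	)

	// With `go test -timeout 1s`, this test trips the same alarm goroutine
	// shown below and aborts the entire binary with "panic: test timed out".
	func TestOutlivesTimeout(t *testing.T) {
		time.Sleep(2 * time.Second)
	}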

goroutine 2596 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 17 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000775520, 0xc0007d1bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000010558, {0xf1f3ae0, 0x2a, 0x2a}, {0xacc9825?, 0xc802f89?, 0xf216aa0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000537e00)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000537e00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 8 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000768600)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 188 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xde92820, 0xc000066060}, 0xc001562750, 0xc00144cf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xde92820, 0xc000066060}, 0x0?, 0xc001562750, 0xc001562798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xde92820?, 0xc000066060?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 193
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2595 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc001960600, 0xc0000675c0)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 691
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 176 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000d01080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 175
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 193 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00089da00, 0xc000066060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 175
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2248 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc00061a280)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000bc1a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000bc1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc0000bc1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc0000bc1a0, 0xde62960)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390
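Most of the remaining goroutines look like the one above: tests parked for 31 minutes in testContext.waitParallel. That is t.Parallel() behaving as designed; with every -parallel slot held by the hung cert tests, each newly started parallel test blocks on a channel receive until a slot frees, which never happens here. The pattern, in outline:

	package demo

	import "testing"

	// t.Parallel signals the runner, then blocks (the "[chan receive]" state
	// above) until one of the -parallel=N slots becomes available.
	func TestWaitsForSlot(t *testing.T) {
		t.Parallel()
		// the body runs only after a currently-running parallel test returns
	}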

goroutine 26 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 25
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

goroutine 2269 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc00061a280)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00152eb60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00152eb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00152eb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00152eb60, 0xc000cbe880)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2268
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2198 [chan receive, 31 minutes]:
testing.(*T).Run(0xc00152e000, {0xc7a94ca?, 0x89d6fcfbd4b?}, 0xc000d3e0c0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00152e000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc00152e000, 0xde62918)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1425 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0xc0014dbc20)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1416
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

goroutine 1891 [syscall, 97 minutes]:
syscall.syscall(0x0?, 0xc001e03320?, 0xc001564fb0?, 0xad83c95?)
	/usr/local/go/src/runtime/sys_darwin.go:23 +0x70
syscall.Flock(0x1?, 0x1?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:682 +0x29
github.com/juju/mutex/v2.acquireFlock.func3()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:114 +0x34
github.com/juju/mutex/v2.acquireFlock.func4()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:121 +0x58
github.com/juju/mutex/v2.acquireFlock.func5()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:151 +0x22
created by github.com/juju/mutex/v2.acquireFlock in goroutine 1880
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:150 +0x4b1
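Goroutine 1891 has sat in a raw flock syscall for 97 minutes: juju/mutex serializes concurrent minikube invocations with an exclusive file lock, and a blocking flock(2) returns only when the holder releases it. A minimal sketch of the blocking acquire; the real mutex_flock.go wraps this in timers and channels:

	package demo

	import "syscall"

	// lockExclusive blocks until the lock on fd is free; without LOCK_NB this
	// is exactly the indefinite [syscall] wait shown above.
	func lockExclusive(fd int) error {
		return syscall.Flock(fd, syscall.LOCK_EX)
	}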

goroutine 2199 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc00061a280)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00152e1a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00152e1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc00152e1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc00152e1a0, 0xde62920)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1263 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc001960900, 0xc00010ea80)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1262
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 691 [syscall, 4 minutes]:
syscall.syscall6(0xc001a7bf80?, 0x1000000000010?, 0x10000000019?, 0x56c46690?, 0x90?, 0xfb38108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0015498a0?, 0xac0a0c5?, 0x90?, 0xddcee80?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xad3a9e5?, 0xc0015498d4, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc000896de0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001960600)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc001960600)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc001d52340, 0xc001960600)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertOptions(0xc001d52340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x445
testing.tRunner(0xc001d52340, 0xde62838)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390
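Goroutines 691 and 2595 are two halves of one os/exec invocation: the test goroutine blocks in wait4 inside cmd.Wait, while watchCtx stands by to kill the child if its context is cancelled. Because the minikube start launched by TestCertOptions never returned, both waited until the 2 h alarm fired. The same pattern, sketched with an assumed deadline:

	package demo

	import (
		"context"
		"os/exec"
		"time"
	)

	// run starts the child via CommandContext; Run blocks in wait4 (goroutine
	// 691 above) while the library's watchCtx goroutine (goroutine 2595)
	// would kill the process if the deadline expired first.
	func run() error {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
		defer cancel()
		return exec.CommandContext(ctx,
			"out/minikube-darwin-amd64", "start", "-p", "cert-options-713000").Run()
	}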

goroutine 1025 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00197e240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 898
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2289 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc00061a280)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00152f1e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00152f1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00152f1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00152f1e0, 0xc000cbeb00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2268
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 187 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00089d9d0, 0x2d)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xd957700?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000d00f60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00089da00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00098a000, {0xde6eb20, 0xc000830210}, 0x1, 0xc000066060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00098a000, 0x3b9aca00, 0x0, 0x1, 0xc000066060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 193
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 189 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 188
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2273 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc00061a280)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000bc820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000bc820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc0000bc820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:85 +0x89
testing.tRunner(0xc0000bc820, 0xde62940)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2291 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc00061a280)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00152f520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00152f520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00152f520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00152f520, 0xc000cbec00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2268
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2272 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc00061a280)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00152f040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00152f040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00152f040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00152f040, 0xc000cbea80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2268
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2567 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0x56afa568, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00197e7e0?, 0xc0009a6e00?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00197e7e0, {0xc0009a6e00, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00073c3c0, {0xc0009a6e00?, 0x569c9fa8?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001a7a2d0, {0xde6d538, 0xc000c1ae10})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xde6d678, 0xc001a7a2d0}, {0xde6d538, 0xc000c1ae10}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xf127980?, {0xde6d678, 0xc001a7a2d0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00152f040?, {0xde6d678?, 0xc001a7a2d0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0xde6d678, 0xc001a7a2d0}, {0xde6d5f8, 0xc00073c3c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc000cbea80?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 692
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 692 [syscall, 5 minutes]:
syscall.syscall6(0xc001a7bf80?, 0x1000000000010?, 0x10000000019?, 0x56c46690?, 0x90?, 0xfb38108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc000c67a40?, 0xac0a0c5?, 0x90?, 0xddcee80?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xad3a9e5?, 0xc000c67a74, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc0008967e0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001960180)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc001960180)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc001d524e0, 0xc001960180)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertExpiration(0xc001d524e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:123 +0x2c5
testing.tRunner(0xc001d524e0, 0xde62830)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1330 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc0014cd680, 0xc001d77920)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 869
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2290 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc00061a280)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00152f380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00152f380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00152f380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00152f380, 0xc000cbeb80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2268
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2275 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc00061a280)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000bd380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000bd380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc0000bd380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:215 +0x39
testing.tRunner(0xc0000bd380, 0xde628e0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2274 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc00061a280)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000bcb60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000bcb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc0000bcb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:143 +0x86
testing.tRunner(0xc0000bcb60, 0xde62968)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2268 [chan receive, 31 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00152e4e0, 0xc000d3e0c0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2198
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1026 [chan receive, 111 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001499e40, 0xc000066060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 898
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2566 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0x56afab38, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00197e480?, 0xc000ce5298?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00197e480, {0xc000ce5298, 0x568, 0x568})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00073c270, {0xc000ce5298?, 0xc00187c1c0?, 0x22e?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001a7a2a0, {0xde6d538, 0xc000c1ade0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xde6d678, 0xc001a7a2a0}, {0xde6d538, 0xc000c1ade0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001565678?, {0xde6d678, 0xc001a7a2a0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001565738?, {0xde6d678?, 0xc001a7a2a0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0xde6d678, 0xc001a7a2a0}, {0xde6d5f8, 0xc00073c270}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001a42420?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 692
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 2292 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc00061a280)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00152f6c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00152f6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00152f6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00152f6c0, 0xc000cbec80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2268
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1408 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0xc0014dbc20)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1416
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

goroutine 2270 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc00061a280)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00152ed00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00152ed00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00152ed00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00152ed00, 0xc000cbe900)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2268
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 774 [IO wait, 113 minutes]:
internal/poll.runtime_pollWait(0x56afad28, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001b1ae00?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc001b1ae00)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc001b1ae00)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000426540)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000426540)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0000240f0, {0xde85710, 0xc000426540})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc0000240f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0x0?, 0xc001d53d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 771
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 2276 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc00061a280)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0000bd520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000bd520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc0000bd520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:292 +0xb4
testing.tRunner(0xc0000bd520, 0xde628f8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2200 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc00061a280)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00152e820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00152e820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc00152e820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc00152e820, 0xde62930)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1013 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001499e10, 0x2c)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xd957700?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00197e120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001499e40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0004b0b40, {0xde6eb20, 0xc001ac0750}, 0x1, 0xc000066060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0004b0b40, 0x3b9aca00, 0x0, 0x1, 0xc000066060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1026
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef
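
Goroutine 1013 is client-go's certificate-rotation worker idling in workqueue Get: sync.Cond.Wait simply means the queue is empty, and the BackoffUntil/JitterUntil/Until frames are the stock "re-run the worker until the stop channel closes" loop. A runnable sketch of that loop, with an illustrative work item:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/workqueue"
)

func main() {
	queue := workqueue.New()
	stopCh := make(chan struct{})

	// wait.Until re-invokes the worker every second until stopCh closes;
	// queue.Get() blocks (sync.Cond.Wait) while the queue is empty.
	go wait.Until(func() {
		for {
			item, shutdown := queue.Get()
			if shutdown {
				return
			}
			fmt.Println("processing", item)
			queue.Done(item)
		}
	}, time.Second, stopCh)

	queue.Add("rotate-client-cert") // illustrative item
	time.Sleep(100 * time.Millisecond)
	queue.ShutDown() // unblocks Get with shutdown=true
	close(stopCh)
}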

goroutine 1014 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xde92820, 0xc000066060}, 0xc001565f50, 0xc000727f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xde92820, 0xc000066060}, 0x11?, 0xc001565f50, 0xc001565f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xde92820?, 0xc000066060?}, 0xc00145aea0?, 0xad3d6a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001565fd0?, 0xad839a4?, 0xc001498f00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1026
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 1015 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1014
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb
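
Goroutines 1014 and 1015 are the polling half of the same cert-rotation controller, built on wait.PollImmediateUntilWithContext: check a condition right away, then on an interval, until it reports done or the context ends. A usage sketch with an illustrative condition:

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	start := time.Now()
	err := wait.PollImmediateUntilWithContext(ctx, 100*time.Millisecond,
		func(ctx context.Context) (bool, error) {
			// done once 300ms have elapsed; purely illustrative
			return time.Since(start) > 300*time.Millisecond, nil
		})
	fmt.Println("poll finished:", err)
}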

goroutine 2293 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc00061a280)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00152f860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00152f860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00152f860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00152f860, 0xc000cbed00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2268
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1352 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc0016cc180, 0xc001dc82a0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1351
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 1143 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc001961500, 0xc001a3c3c0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1142
	/usr/local/go/src/os/exec/exec.go:754 +0x976
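
Goroutines 1352 and 1143 are genuine leaks: a command started via exec.CommandContext spawns a watchCtx goroutine that reports back over a channel consumed by Wait, so "chan send" for 109+ minutes means the caller abandoned the Cmd without ever calling Wait. A minimal sketch of the correct pairing (the child command is illustrative):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, "sleep", "10")
	if err := cmd.Start(); err != nil {
		fmt.Println("start:", err)
		return
	}
	// Always pair Start with Wait: Wait drains watchCtx's result channel
	// and releases its goroutine. Dropping the Cmd here instead would
	// reproduce the "chan send" leak in the dump above.
	fmt.Println("wait:", cmd.Wait()) // context deadline kills the child
}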

goroutine 2271 [chan receive, 31 minutes]:
testing.(*testContext).waitParallel(0xc00061a280)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00152eea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00152eea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00152eea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00152eea0, 0xc000cbea00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2268
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2568 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc001960180, 0xc0000673e0)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 692
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2593 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x56afaa40, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00197ed20?, 0xc000bea28f?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00197ed20, {0xc000bea28f, 0x571, 0x571})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00073c668, {0xc000bea28f?, 0xc001633180?, 0x225?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001a7a540, {0xde6d538, 0xc000c1acd0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xde6d678, 0xc001a7a540}, {0xde6d538, 0xc000c1acd0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001565e78?, {0xde6d678, 0xc001a7a540})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001565f38?, {0xde6d678?, 0xc001a7a540?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0xde6d678, 0xc001a7a540}, {0xde6d5f8, 0xc00073c668}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc000067500?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 691
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae
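
Goroutines 2593 and 2594 are the per-stream copiers that Cmd.Start spins up when Stdout/Stderr are plain buffers rather than *os.File: each one io.Copy's from the child's pipe into the buffer, and "IO wait" just means the child has not written recently. A small sketch of that wiring:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	var stdout, stderr bytes.Buffer
	cmd := exec.Command("sh", "-c", "echo out; echo err 1>&2")
	cmd.Stdout = &stdout // Start launches one copier goroutine per buffer
	cmd.Stderr = &stderr // (writerDescriptor.func1 in the dump above)
	if err := cmd.Run(); err != nil {
		fmt.Println("run:", err)
		return
	}
	fmt.Printf("stdout=%q stderr=%q\n", stdout.String(), stderr.String())
}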

goroutine 2594 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x56afae20, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00197ede0?, 0xc00057de00?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00197ede0, {0xc00057de00, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00073c6b8, {0xc00057de00?, 0xc001632c40?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001a7a570, {0xde6d538, 0xc000c1ad00})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xde6d678, 0xc001a7a570}, {0xde6d538, 0xc000c1ad00}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0019d5e60?, {0xde6d678, 0xc001a7a570})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xac081d6?, {0xde6d678?, 0xc001a7a570?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0xde6d678, 0xc001a7a570}, {0xde6d5f8, 0xc00073c6b8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc000cbee00?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 691
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

x
+
TestDockerFlags (757.9s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-212000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0729 12:09:56.019038   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 12:10:31.064274   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 12:14:39.076900   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 12:14:56.019569   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 12:15:31.063417   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 12:19:56.019056   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 12:20:14.137202   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
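
These E lines come from client-go's certificate rotation re-reading a client key pair whose profile directory was already deleted by an earlier test's cleanup; they are noise for this failure, not its cause. What the reload boils down to is roughly the following (the .key path is an assumption placed next to the .crt path from the log):

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	const crt = "/Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt"
	const key = "/Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.key"
	// Once the profile directory is gone this fails with ENOENT, which
	// cert_rotation.go logs as "key failed with ... no such file or directory".
	_, err := tls.LoadX509KeyPair(crt, key)
	fmt.Println("key failed with:", err)
}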
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-212000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 52 (12m37.03404845s)

-- stdout --
	* [docker-flags-212000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "docker-flags-212000" primary control-plane node in "docker-flags-212000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-212000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0729 12:08:41.181780   24203 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:08:41.182063   24203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:08:41.182069   24203 out.go:304] Setting ErrFile to fd 2...
	I0729 12:08:41.182072   24203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:08:41.182246   24203 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 12:08:41.183730   24203 out.go:298] Setting JSON to false
	I0729 12:08:41.206408   24203 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":11291,"bootTime":1722268830,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 12:08:41.206502   24203 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 12:08:41.228356   24203 out.go:177] * [docker-flags-212000] minikube v1.33.1 on Darwin 14.5
	I0729 12:08:41.271111   24203 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 12:08:41.271185   24203 notify.go:220] Checking for updates...
	I0729 12:08:41.313564   24203 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	I0729 12:08:41.335008   24203 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 12:08:41.355890   24203 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:08:41.376782   24203 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	I0729 12:08:41.397818   24203 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:08:41.419806   24203 config.go:182] Loaded profile config "force-systemd-flag-463000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 12:08:41.419969   24203 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:08:41.444672   24203 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 12:08:41.444839   24203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 12:08:41.524163   24203 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:113 OomKillDisable:false NGoroutines:230 SystemTime:2024-07-29 19:08:41.514956074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 12:08:41.566844   24203 out.go:177] * Using the docker driver based on user configuration
	I0729 12:08:41.587881   24203 start.go:297] selected driver: docker
	I0729 12:08:41.587911   24203 start.go:901] validating driver "docker" against <nil>
	I0729 12:08:41.587926   24203 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:08:41.592284   24203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 12:08:41.668697   24203 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:113 OomKillDisable:false NGoroutines:230 SystemTime:2024-07-29 19:08:41.660035044 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 12:08:41.668869   24203 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 12:08:41.669069   24203 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0729 12:08:41.690752   24203 out.go:177] * Using Docker Desktop driver with root privileges
	I0729 12:08:41.711912   24203 cni.go:84] Creating CNI manager for ""
	I0729 12:08:41.711952   24203 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 12:08:41.711965   24203 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 12:08:41.712091   24203 start.go:340] cluster config:
	{Name:docker-flags-212000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-212000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 12:08:41.733768   24203 out.go:177] * Starting "docker-flags-212000" primary control-plane node in "docker-flags-212000" cluster
	I0729 12:08:41.775764   24203 cache.go:121] Beginning downloading kic base image for docker with docker
	I0729 12:08:41.797896   24203 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0729 12:08:41.839811   24203 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 12:08:41.839892   24203 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0729 12:08:41.839913   24203 cache.go:56] Caching tarball of preloaded images
	I0729 12:08:41.839913   24203 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 12:08:41.840149   24203 preload.go:172] Found /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0729 12:08:41.840168   24203 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 12:08:41.840323   24203 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/docker-flags-212000/config.json ...
	I0729 12:08:41.840409   24203 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/docker-flags-212000/config.json: {Name:mk4731655d7733c1f3c99421be6e5c626940adaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0729 12:08:41.866008   24203 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0729 12:08:41.866024   24203 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 12:08:41.866166   24203 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 12:08:41.866198   24203 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 12:08:41.866213   24203 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 12:08:41.866224   24203 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 12:08:41.866228   24203 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0729 12:08:41.869428   24203 image.go:273] response: 
	I0729 12:08:41.996578   24203 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0729 12:08:41.996624   24203 cache.go:194] Successfully downloaded all kic artifacts
	I0729 12:08:41.996686   24203 start.go:360] acquireMachinesLock for docker-flags-212000: {Name:mkdd413e1169b8734f3cc1b9e195daa1880458a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:08:41.997487   24203 start.go:364] duration metric: took 788.58µs to acquireMachinesLock for "docker-flags-212000"
	I0729 12:08:41.997519   24203 start.go:93] Provisioning new machine with config: &{Name:docker-flags-212000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-212000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 12:08:41.997580   24203 start.go:125] createHost starting for "" (driver="docker")
	I0729 12:08:42.039789   24203 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0729 12:08:42.039991   24203 start.go:159] libmachine.API.Create for "docker-flags-212000" (driver="docker")
	I0729 12:08:42.040029   24203 client.go:168] LocalClient.Create starting
	I0729 12:08:42.040117   24203 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/ca.pem
	I0729 12:08:42.040171   24203 main.go:141] libmachine: Decoding PEM data...
	I0729 12:08:42.040186   24203 main.go:141] libmachine: Parsing certificate...
	I0729 12:08:42.040243   24203 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/cert.pem
	I0729 12:08:42.040281   24203 main.go:141] libmachine: Decoding PEM data...
	I0729 12:08:42.040289   24203 main.go:141] libmachine: Parsing certificate...
	I0729 12:08:42.040778   24203 cli_runner.go:164] Run: docker network inspect docker-flags-212000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 12:08:42.057993   24203 cli_runner.go:211] docker network inspect docker-flags-212000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 12:08:42.058095   24203 network_create.go:284] running [docker network inspect docker-flags-212000] to gather additional debugging logs...
	I0729 12:08:42.058112   24203 cli_runner.go:164] Run: docker network inspect docker-flags-212000
	W0729 12:08:42.075216   24203 cli_runner.go:211] docker network inspect docker-flags-212000 returned with exit code 1
	I0729 12:08:42.075248   24203 network_create.go:287] error running [docker network inspect docker-flags-212000]: docker network inspect docker-flags-212000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-212000 not found
	I0729 12:08:42.075260   24203 network_create.go:289] output of [docker network inspect docker-flags-212000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-212000 not found
	
	** /stderr **
	I0729 12:08:42.075407   24203 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 12:08:42.094561   24203 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:08:42.096002   24203 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:08:42.097335   24203 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:08:42.097669   24203 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001692cc0}
	I0729 12:08:42.097684   24203 network_create.go:124] attempt to create docker network docker-flags-212000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0729 12:08:42.097768   24203 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-212000 docker-flags-212000
	W0729 12:08:42.115224   24203 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-212000 docker-flags-212000 returned with exit code 1
	W0729 12:08:42.115260   24203 network_create.go:149] failed to create docker network docker-flags-212000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-212000 docker-flags-212000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0729 12:08:42.115278   24203 network_create.go:116] failed to create docker network docker-flags-212000 192.168.76.0/24, will retry: subnet is taken
	I0729 12:08:42.116878   24203 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:08:42.117241   24203 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0013dd8e0}
	I0729 12:08:42.117253   24203 network_create.go:124] attempt to create docker network docker-flags-212000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0729 12:08:42.117327   24203 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-212000 docker-flags-212000
	I0729 12:08:42.181292   24203 network_create.go:108] docker network docker-flags-212000 192.168.85.0/24 created
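
The lines above show minikube walking its private /24 candidates: 192.168.49.0, .58.0 and .67.0 were already reserved, .76.0 failed at the daemon with "Pool overlaps with other one on this address space", and .85.0 finally succeeded. A hedged sketch of that walk via the docker CLI; the candidate list and error matching below are assumptions for illustration:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func createNetwork(name string) (string, error) {
	candidates := []string{"192.168.49.0/24", "192.168.58.0/24",
		"192.168.67.0/24", "192.168.76.0/24", "192.168.85.0/24"}
	for _, subnet := range candidates {
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, name).CombinedOutput()
		if err == nil {
			return subnet, nil // network created on this subnet
		}
		if strings.Contains(string(out), "Pool overlaps") {
			continue // subnet taken by another network, try the next one
		}
		return "", fmt.Errorf("create %s: %v: %s", subnet, err, out)
	}
	return "", fmt.Errorf("no free subnet for %s", name)
}

func main() {
	subnet, err := createNetwork("docker-flags-212000")
	fmt.Println(subnet, err)
}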
	I0729 12:08:42.181337   24203 kic.go:121] calculated static IP "192.168.85.2" for the "docker-flags-212000" container
	I0729 12:08:42.181482   24203 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 12:08:42.201763   24203 cli_runner.go:164] Run: docker volume create docker-flags-212000 --label name.minikube.sigs.k8s.io=docker-flags-212000 --label created_by.minikube.sigs.k8s.io=true
	I0729 12:08:42.220757   24203 oci.go:103] Successfully created a docker volume docker-flags-212000
	I0729 12:08:42.220875   24203 cli_runner.go:164] Run: docker run --rm --name docker-flags-212000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-212000 --entrypoint /usr/bin/test -v docker-flags-212000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 12:08:42.674793   24203 oci.go:107] Successfully prepared a docker volume docker-flags-212000
	I0729 12:08:42.674841   24203 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 12:08:42.674866   24203 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 12:08:42.674988   24203 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-212000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
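
This is the step the run then stalls in for six minutes: the lz4 preload tarball is bind-mounted read-only into a throwaway kicbase container and untarred into the cluster's named volume. A sketch of assembling the same docker run invocation (paths are taken from the log; the image reference is shortened by dropping its digest):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload untars a preloaded-images tarball into a named docker
// volume by running tar inside a disposable container, as in the log above.
func extractPreload(tarball, volume, image string) error {
	out, err := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
	).CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(extractPreload(
		"/Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4",
		"docker-flags-212000",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326",
	))
}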
	I0729 12:14:42.041285   24203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:14:42.041432   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:14:42.061309   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:14:42.061436   24203 retry.go:31] will retry after 178.07601ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:42.240813   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:14:42.261143   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:14:42.261235   24203 retry.go:31] will retry after 263.920558ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:42.525974   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:14:42.545389   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:14:42.545500   24203 retry.go:31] will retry after 552.384665ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:43.098376   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:14:43.118544   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:14:43.118634   24203 retry.go:31] will retry after 678.335186ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:43.799423   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:14:43.820205   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	W0729 12:14:43.820300   24203 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	
	W0729 12:14:43.820329   24203 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:43.820402   24203 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 12:14:43.820459   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:14:43.837403   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:14:43.837492   24203 retry.go:31] will retry after 338.721595ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:44.176732   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:14:44.196996   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:14:44.197099   24203 retry.go:31] will retry after 358.084247ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:44.557572   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:14:44.577742   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:14:44.577834   24203 retry.go:31] will retry after 380.312741ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:44.960602   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:14:44.980615   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:14:44.980707   24203 retry.go:31] will retry after 689.318501ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:45.672436   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:14:45.691797   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	W0729 12:14:45.691894   24203 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	
	W0729 12:14:45.691916   24203 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:45.691935   24203 start.go:128] duration metric: took 6m3.694700847s to createHost
	I0729 12:14:45.691941   24203 start.go:83] releasing machines lock for "docker-flags-212000", held for 6m3.694805977s
	W0729 12:14:45.691955   24203 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0729 12:14:45.692407   24203 cli_runner.go:164] Run: docker container inspect docker-flags-212000 --format={{.State.Status}}
	W0729 12:14:45.756593   24203 cli_runner.go:211] docker container inspect docker-flags-212000 --format={{.State.Status}} returned with exit code 1
	I0729 12:14:45.756653   24203 delete.go:82] Unable to get host status for docker-flags-212000, assuming it has already been deleted: state: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	W0729 12:14:45.756730   24203 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0729 12:14:45.756742   24203 start.go:729] Will try again in 5 seconds ...
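
The failed inspects above show retry.go's shape: re-run the probe after a growing, jittered delay (178ms, 263ms, 552ms, 678ms, ...) until it succeeds or the surrounding createHost deadline (360 seconds here) expires. A rough sketch of that pattern; the delays and probe command are illustrative assumptions, not minikube's exact policy:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryWithBackoff re-runs op with a doubling, jittered delay until it
// succeeds or the deadline passes, mirroring the "will retry after ..."
// lines in the log above.
func retryWithBackoff(deadline time.Duration, op func() error) error {
	delay := 150 * time.Millisecond
	stop := time.Now().Add(deadline)
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("deadline exceeded: %w", err)
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
}

func main() {
	err := retryWithBackoff(3*time.Second, func() error {
		return exec.Command("docker", "container", "inspect",
			"docker-flags-212000").Run()
	})
	fmt.Println(err)
}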
	I0729 12:14:50.758220   24203 start.go:360] acquireMachinesLock for docker-flags-212000: {Name:mkdd413e1169b8734f3cc1b9e195daa1880458a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:14:50.758417   24203 start.go:364] duration metric: took 154.405µs to acquireMachinesLock for "docker-flags-212000"
	I0729 12:14:50.758454   24203 start.go:96] Skipping create...Using existing machine configuration
	I0729 12:14:50.758470   24203 fix.go:54] fixHost starting: 
	I0729 12:14:50.758883   24203 cli_runner.go:164] Run: docker container inspect docker-flags-212000 --format={{.State.Status}}
	W0729 12:14:50.778890   24203 cli_runner.go:211] docker container inspect docker-flags-212000 --format={{.State.Status}} returned with exit code 1
	I0729 12:14:50.778947   24203 fix.go:112] recreateIfNeeded on docker-flags-212000: state= err=unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:50.778979   24203 fix.go:117] machineExists: false. err=machine does not exist
	I0729 12:14:50.800509   24203 out.go:177] * docker "docker-flags-212000" container is missing, will recreate.
	I0729 12:14:50.842348   24203 delete.go:124] DEMOLISHING docker-flags-212000 ...
	I0729 12:14:50.842544   24203 cli_runner.go:164] Run: docker container inspect docker-flags-212000 --format={{.State.Status}}
	W0729 12:14:50.861255   24203 cli_runner.go:211] docker container inspect docker-flags-212000 --format={{.State.Status}} returned with exit code 1
	W0729 12:14:50.861316   24203 stop.go:83] unable to get state: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:50.861342   24203 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:50.861758   24203 cli_runner.go:164] Run: docker container inspect docker-flags-212000 --format={{.State.Status}}
	W0729 12:14:50.878834   24203 cli_runner.go:211] docker container inspect docker-flags-212000 --format={{.State.Status}} returned with exit code 1
	I0729 12:14:50.878901   24203 delete.go:82] Unable to get host status for docker-flags-212000, assuming it has already been deleted: state: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:50.878993   24203 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-212000
	W0729 12:14:50.895831   24203 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-212000 returned with exit code 1
	I0729 12:14:50.895865   24203 kic.go:371] could not find the container docker-flags-212000 to remove it. will try anyways
	I0729 12:14:50.895948   24203 cli_runner.go:164] Run: docker container inspect docker-flags-212000 --format={{.State.Status}}
	W0729 12:14:50.912874   24203 cli_runner.go:211] docker container inspect docker-flags-212000 --format={{.State.Status}} returned with exit code 1
	W0729 12:14:50.912936   24203 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:50.913015   24203 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-212000 /bin/bash -c "sudo init 0"
	W0729 12:14:50.930208   24203 cli_runner.go:211] docker exec --privileged -t docker-flags-212000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0729 12:14:50.930238   24203 oci.go:650] error shutdown docker-flags-212000: docker exec --privileged -t docker-flags-212000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:51.930542   24203 cli_runner.go:164] Run: docker container inspect docker-flags-212000 --format={{.State.Status}}
	W0729 12:14:51.949057   24203 cli_runner.go:211] docker container inspect docker-flags-212000 --format={{.State.Status}} returned with exit code 1
	I0729 12:14:51.949104   24203 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:51.949115   24203 oci.go:664] temporary error: container docker-flags-212000 status is  but expect it to be exited
	I0729 12:14:51.949140   24203 retry.go:31] will retry after 284.344296ms: couldn't verify container is exited. %v: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:52.234081   24203 cli_runner.go:164] Run: docker container inspect docker-flags-212000 --format={{.State.Status}}
	W0729 12:14:52.253590   24203 cli_runner.go:211] docker container inspect docker-flags-212000 --format={{.State.Status}} returned with exit code 1
	I0729 12:14:52.253644   24203 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:52.253653   24203 oci.go:664] temporary error: container docker-flags-212000 status is  but expect it to be exited
	I0729 12:14:52.253674   24203 retry.go:31] will retry after 576.932448ms: couldn't verify container is exited. %v: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:52.830838   24203 cli_runner.go:164] Run: docker container inspect docker-flags-212000 --format={{.State.Status}}
	W0729 12:14:52.850646   24203 cli_runner.go:211] docker container inspect docker-flags-212000 --format={{.State.Status}} returned with exit code 1
	I0729 12:14:52.850691   24203 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:52.850706   24203 oci.go:664] temporary error: container docker-flags-212000 status is  but expect it to be exited
	I0729 12:14:52.850735   24203 retry.go:31] will retry after 1.047785179s: couldn't verify container is exited. %v: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:53.899660   24203 cli_runner.go:164] Run: docker container inspect docker-flags-212000 --format={{.State.Status}}
	W0729 12:14:53.919328   24203 cli_runner.go:211] docker container inspect docker-flags-212000 --format={{.State.Status}} returned with exit code 1
	I0729 12:14:53.919375   24203 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:53.919400   24203 oci.go:664] temporary error: container docker-flags-212000 status is  but expect it to be exited
	I0729 12:14:53.919433   24203 retry.go:31] will retry after 925.153636ms: couldn't verify container is exited. %v: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:54.846025   24203 cli_runner.go:164] Run: docker container inspect docker-flags-212000 --format={{.State.Status}}
	W0729 12:14:54.865439   24203 cli_runner.go:211] docker container inspect docker-flags-212000 --format={{.State.Status}} returned with exit code 1
	I0729 12:14:54.865484   24203 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:54.865494   24203 oci.go:664] temporary error: container docker-flags-212000 status is  but expect it to be exited
	I0729 12:14:54.865521   24203 retry.go:31] will retry after 3.270180096s: couldn't verify container is exited. %v: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:58.138069   24203 cli_runner.go:164] Run: docker container inspect docker-flags-212000 --format={{.State.Status}}
	W0729 12:14:58.157790   24203 cli_runner.go:211] docker container inspect docker-flags-212000 --format={{.State.Status}} returned with exit code 1
	I0729 12:14:58.157856   24203 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:14:58.157867   24203 oci.go:664] temporary error: container docker-flags-212000 status is  but expect it to be exited
	I0729 12:14:58.157894   24203 retry.go:31] will retry after 2.714538646s: couldn't verify container is exited. %v: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:15:00.874856   24203 cli_runner.go:164] Run: docker container inspect docker-flags-212000 --format={{.State.Status}}
	W0729 12:15:00.894271   24203 cli_runner.go:211] docker container inspect docker-flags-212000 --format={{.State.Status}} returned with exit code 1
	I0729 12:15:00.894318   24203 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:15:00.894328   24203 oci.go:664] temporary error: container docker-flags-212000 status is  but expect it to be exited
	I0729 12:15:00.894354   24203 retry.go:31] will retry after 3.277456972s: couldn't verify container is exited. %v: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:15:04.174132   24203 cli_runner.go:164] Run: docker container inspect docker-flags-212000 --format={{.State.Status}}
	W0729 12:15:04.194237   24203 cli_runner.go:211] docker container inspect docker-flags-212000 --format={{.State.Status}} returned with exit code 1
	I0729 12:15:04.194286   24203 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:15:04.194297   24203 oci.go:664] temporary error: container docker-flags-212000 status is  but expect it to be exited
	I0729 12:15:04.194323   24203 retry.go:31] will retry after 6.293330041s: couldn't verify container is exited. %v: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:15:10.489997   24203 cli_runner.go:164] Run: docker container inspect docker-flags-212000 --format={{.State.Status}}
	W0729 12:15:10.508673   24203 cli_runner.go:211] docker container inspect docker-flags-212000 --format={{.State.Status}} returned with exit code 1
	I0729 12:15:10.508718   24203 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:15:10.508730   24203 oci.go:664] temporary error: container docker-flags-212000 status is  but expect it to be exited
	I0729 12:15:10.508759   24203 oci.go:88] couldn't shut down docker-flags-212000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	 
	I0729 12:15:10.508842   24203 cli_runner.go:164] Run: docker rm -f -v docker-flags-212000
	I0729 12:15:10.526505   24203 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-212000
	W0729 12:15:10.543548   24203 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-212000 returned with exit code 1
	I0729 12:15:10.543667   24203 cli_runner.go:164] Run: docker network inspect docker-flags-212000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 12:15:10.561048   24203 cli_runner.go:164] Run: docker network rm docker-flags-212000
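
The oci.go and retry.go entries above trace a bounded-backoff poll: minikube re-inspects the container, and when the state still cannot be verified it sleeps a growing, jittered interval before the next attempt, giving up once its overall budget is spent (here the shutdown could not be confirmed, which the log marks as "might be okay" before falling through to docker rm). A minimal sketch of that pattern, assuming a hypothetical verifyExited helper and an illustrative 20-second budget rather than minikube's real retry package:

	package main
	
	import (
		"fmt"
		"math/rand"
		"os/exec"
		"strings"
		"time"
	)
	
	// verifyExited inspects the container and succeeds only when Docker
	// reports its state as "exited".
	func verifyExited(name string) error {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").CombinedOutput()
		if err != nil {
			return fmt.Errorf("unknown state %q: %v: %s", name, err, out)
		}
		if got := strings.TrimSpace(string(out)); got != "exited" {
			return fmt.Errorf("container %s status is %q but expect it to be exited", name, got)
		}
		return nil
	}
	
	func main() {
		deadline := time.Now().Add(20 * time.Second) // overall budget (illustrative)
		delay := 300 * time.Millisecond              // first backoff step
		for {
			err := verifyExited("docker-flags-212000")
			if err == nil {
				return
			}
			if time.Now().After(deadline) {
				fmt.Println("couldn't verify container is exited (might be okay):", err)
				return
			}
			// Grow the delay and add jitter, matching the increasing
			// "will retry after ..." durations in the log above.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
	}
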
	I0729 12:15:10.638356   24203 fix.go:124] Sleeping 1 second for extra luck!
	I0729 12:15:11.640495   24203 start.go:125] createHost starting for "" (driver="docker")
	I0729 12:15:11.662543   24203 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0729 12:15:11.662735   24203 start.go:159] libmachine.API.Create for "docker-flags-212000" (driver="docker")
	I0729 12:15:11.662770   24203 client.go:168] LocalClient.Create starting
	I0729 12:15:11.662988   24203 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/ca.pem
	I0729 12:15:11.663097   24203 main.go:141] libmachine: Decoding PEM data...
	I0729 12:15:11.663124   24203 main.go:141] libmachine: Parsing certificate...
	I0729 12:15:11.663206   24203 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/cert.pem
	I0729 12:15:11.663284   24203 main.go:141] libmachine: Decoding PEM data...
	I0729 12:15:11.663299   24203 main.go:141] libmachine: Parsing certificate...
	I0729 12:15:11.683656   24203 cli_runner.go:164] Run: docker network inspect docker-flags-212000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 12:15:11.703315   24203 cli_runner.go:211] docker network inspect docker-flags-212000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 12:15:11.703431   24203 network_create.go:284] running [docker network inspect docker-flags-212000] to gather additional debugging logs...
	I0729 12:15:11.703451   24203 cli_runner.go:164] Run: docker network inspect docker-flags-212000
	W0729 12:15:11.721027   24203 cli_runner.go:211] docker network inspect docker-flags-212000 returned with exit code 1
	I0729 12:15:11.721051   24203 network_create.go:287] error running [docker network inspect docker-flags-212000]: docker network inspect docker-flags-212000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-212000 not found
	I0729 12:15:11.721063   24203 network_create.go:289] output of [docker network inspect docker-flags-212000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-212000 not found
	
	** /stderr **
	I0729 12:15:11.721236   24203 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 12:15:11.740413   24203 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:15:11.742034   24203 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:15:11.743627   24203 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:15:11.745155   24203 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:15:11.746678   24203 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:15:11.748377   24203 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:15:11.748883   24203 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015cd160}
	I0729 12:15:11.748904   24203 network_create.go:124] attempt to create docker network docker-flags-212000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0729 12:15:11.749016   24203 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-212000 docker-flags-212000
	I0729 12:15:11.812389   24203 network_create.go:108] docker network docker-flags-212000 192.168.103.0/24 created
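
The network.go lines above show the free-subnet scan that preceded this create: candidate 192.168.x.0/24 blocks are tried in steps of 9 (49, 58, 67, ...), each block already backing an existing network is skipped as reserved, and the first free block (192.168.103.0/24 here) is handed to docker network create. A rough sketch of that scan, with the reserved set hard-coded where minikube actually inspects live Docker and host state:

	package main
	
	import "fmt"
	
	func main() {
		// Subnets the log reports as reserved; minikube discovers these
		// dynamically from existing networks and interfaces.
		reserved := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
			"192.168.85.0/24": true, "192.168.94.0/24": true,
		}
		// Walk candidates in steps of 9, matching 49, 58, 67, ... above.
		for third := 49; third < 256; third += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", third)
			if reserved[subnet] {
				fmt.Println("skipping subnet", subnet, "that is reserved")
				continue
			}
			fmt.Println("using free private subnet", subnet)
			// The winner feeds the docker network create invocation
			// recorded at 12:15:11.749016 above.
			break
		}
	}
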
	I0729 12:15:11.812424   24203 kic.go:121] calculated static IP "192.168.103.2" for the "docker-flags-212000" container
	I0729 12:15:11.812535   24203 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 12:15:11.832477   24203 cli_runner.go:164] Run: docker volume create docker-flags-212000 --label name.minikube.sigs.k8s.io=docker-flags-212000 --label created_by.minikube.sigs.k8s.io=true
	I0729 12:15:11.849693   24203 oci.go:103] Successfully created a docker volume docker-flags-212000
	I0729 12:15:11.849827   24203 cli_runner.go:164] Run: docker run --rm --name docker-flags-212000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-212000 --entrypoint /usr/bin/test -v docker-flags-212000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 12:15:12.100686   24203 oci.go:107] Successfully prepared a docker volume docker-flags-212000
	I0729 12:15:12.100726   24203 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 12:15:12.100738   24203 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 12:15:12.100854   24203 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-212000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0729 12:21:11.688828   24203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:21:11.688938   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:21:11.708147   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:21:11.708257   24203 retry.go:31] will retry after 173.890484ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
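
Each failed lookup above is rendering the Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}, i.e. the first host port Docker mapped onto the guest's sshd; with the container missing, the CLI exits 1 before the template is evaluated. For illustration, the same lookup done directly against the inspect JSON (the struct below is a trimmed, assumed subset of Docker's inspect schema, not minikube code):

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	type portBinding struct {
		HostIP   string `json:"HostIp"`
		HostPort string `json:"HostPort"`
	}
	
	type inspectInfo struct {
		NetworkSettings struct {
			Ports map[string][]portBinding `json:"Ports"`
		} `json:"NetworkSettings"`
	}
	
	// sshPort returns the host port bound to the container's 22/tcp.
	func sshPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", container).Output()
		if err != nil {
			// e.g. "Error response from daemon: No such container: ..."
			return "", fmt.Errorf("get port 22 for %q: %w", container, err)
		}
		var infos []inspectInfo
		if err := json.Unmarshal(out, &infos); err != nil || len(infos) == 0 {
			return "", fmt.Errorf("parse inspect output: %v", err)
		}
		bindings := infos[0].NetworkSettings.Ports["22/tcp"]
		if len(bindings) == 0 {
			return "", fmt.Errorf("no host binding for 22/tcp")
		}
		return bindings[0].HostPort, nil
	}
	
	func main() {
		port, err := sshPort("docker-flags-212000")
		if err != nil {
			fmt.Println("will retry:", err)
			return
		}
		fmt.Println("ssh host port:", port)
	}
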
	I0729 12:21:11.883845   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:21:11.903574   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:21:11.903670   24203 retry.go:31] will retry after 208.090146ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:21:12.112589   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:21:12.132353   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:21:12.132474   24203 retry.go:31] will retry after 361.876803ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:21:12.496733   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:21:12.516684   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:21:12.516779   24203 retry.go:31] will retry after 580.326473ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:21:13.099552   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:21:13.119846   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	W0729 12:21:13.119952   24203 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	
	W0729 12:21:13.119976   24203 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
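
The probe that just failed, sh -c "df -h /var | awk 'NR==2{print $5}'", would report how full /var is inside the guest: row 2 of df's output (the data row under the header), field 5 (Use%). It never gets that far here because the ssh port lookup fails first. A small sketch of the same parse in Go, assuming df's standard column order:

	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// parseDfUsePercent extracts the Use% column from the single
	// filesystem row of a `df -h <path>` invocation.
	func parseDfUsePercent(dfOutput string) (string, error) {
		lines := strings.Split(strings.TrimSpace(dfOutput), "\n")
		if len(lines) < 2 {
			return "", fmt.Errorf("unexpected df output: %q", dfOutput)
		}
		fields := strings.Fields(lines[1]) // NR==2: the row after the header
		if len(fields) < 5 {
			return "", fmt.Errorf("unexpected df row: %q", lines[1])
		}
		return fields[4], nil // $5: Use%
	}
	
	func main() {
		sample := "Filesystem      Size  Used Avail Use% Mounted on\n" +
			"/dev/vda1        59G   12G   44G  22% /var"
		pct, err := parseDfUsePercent(sample)
		fmt.Println(pct, err) // 22% <nil>
	}
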
	I0729 12:21:13.120041   24203 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 12:21:13.120099   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:21:13.138022   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:21:13.138124   24203 retry.go:31] will retry after 205.901505ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:21:13.346316   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:21:13.365702   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:21:13.365798   24203 retry.go:31] will retry after 413.094911ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:21:13.781336   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:21:13.801210   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:21:13.801322   24203 retry.go:31] will retry after 376.892384ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:21:14.178545   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:21:14.198869   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:21:14.198969   24203 retry.go:31] will retry after 436.10673ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:21:14.635623   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:21:14.655993   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	W0729 12:21:14.656100   24203 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	
	W0729 12:21:14.656120   24203 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:21:14.656132   24203 start.go:128] duration metric: took 6m2.989798675s to createHost
	I0729 12:21:14.656209   24203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:21:14.656270   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:21:14.673185   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:21:14.673280   24203 retry.go:31] will retry after 158.723384ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:21:14.834382   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:21:14.852869   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:21:14.852970   24203 retry.go:31] will retry after 235.943395ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:21:15.090337   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:21:15.110889   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:21:15.110982   24203 retry.go:31] will retry after 487.864181ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:21:15.601249   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:21:15.621824   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:21:15.621920   24203 retry.go:31] will retry after 636.048907ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:21:16.258311   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:21:16.277960   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	W0729 12:21:16.278060   24203 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	
	W0729 12:21:16.278080   24203 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:21:16.278150   24203 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 12:21:16.278209   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:21:16.296035   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:21:16.296128   24203 retry.go:31] will retry after 189.155074ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:21:16.486152   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:21:16.506684   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:21:16.506789   24203 retry.go:31] will retry after 406.287728ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:21:16.913704   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:21:16.933727   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:21:16.933835   24203 retry.go:31] will retry after 431.613566ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:21:17.366992   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:21:17.386986   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	I0729 12:21:17.387074   24203 retry.go:31] will retry after 629.019163ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:21:18.016899   24203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000
	W0729 12:21:18.036575   24203 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000 returned with exit code 1
	W0729 12:21:18.036677   24203 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	
	W0729 12:21:18.036696   24203 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-212000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-212000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	I0729 12:21:18.036704   24203 fix.go:56] duration metric: took 6m27.252466026s for fixHost
	I0729 12:21:18.036711   24203 start.go:83] releasing machines lock for "docker-flags-212000", held for 6m27.252512527s
	W0729 12:21:18.036787   24203 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-212000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p docker-flags-212000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0729 12:21:18.079496   24203 out.go:177] 
	W0729 12:21:18.101462   24203 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0729 12:21:18.101511   24203 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0729 12:21:18.101592   24203 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0729 12:21:18.144426   24203 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-212000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-212000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-212000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (165.115751ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-212000 host status: state: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	

** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-212000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-212000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-212000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (161.467488ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-212000 host status: state: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000
	

** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-212000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-212000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-29 12:21:18.529967 -0700 PDT m=+6943.624666146
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-212000
helpers_test.go:235: (dbg) docker inspect docker-flags-212000:

-- stdout --
	[
	    {
	        "Name": "docker-flags-212000",
	        "Id": "c79d378b05c4ae93387e0a0d33fb436d552ae87e0b7c78a6999632c2eff1b4f1",
	        "Created": "2024-07-29T19:15:11.765103086Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "docker-flags-212000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-212000 -n docker-flags-212000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-212000 -n docker-flags-212000: exit status 7 (74.563889ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 12:21:18.623681   24503 status.go:249] status error: host: state: unknown state "docker-flags-212000": docker container inspect docker-flags-212000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-212000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-212000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-212000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-212000
--- FAIL: TestDockerFlags (757.90s)

TestForceSystemdFlag (749.68s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-463000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-463000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 52 (12m28.972995523s)

-- stdout --
	* [force-systemd-flag-463000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-flag-463000" primary control-plane node in "force-systemd-flag-463000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-463000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0729 12:07:58.064517   24127 out.go:291] Setting OutFile to fd 1 ...
	I0729 12:07:58.064707   24127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:07:58.064712   24127 out.go:304] Setting ErrFile to fd 2...
	I0729 12:07:58.064716   24127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 12:07:58.064896   24127 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 12:07:58.066436   24127 out.go:298] Setting JSON to false
	I0729 12:07:58.090157   24127 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":11248,"bootTime":1722268830,"procs":495,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 12:07:58.090255   24127 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 12:07:58.112112   24127 out.go:177] * [force-systemd-flag-463000] minikube v1.33.1 on Darwin 14.5
	I0729 12:07:58.153759   24127 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 12:07:58.153932   24127 notify.go:220] Checking for updates...
	I0729 12:07:58.196678   24127 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	I0729 12:07:58.217770   24127 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 12:07:58.238845   24127 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 12:07:58.259810   24127 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	I0729 12:07:58.280933   24127 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 12:07:58.302311   24127 config.go:182] Loaded profile config "force-systemd-env-292000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 12:07:58.302439   24127 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 12:07:58.325641   24127 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 12:07:58.325816   24127 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 12:07:58.404174   24127 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:109 OomKillDisable:false NGoroutines:218 SystemTime:2024-07-29 19:07:58.39511153 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 12:07:58.425790   24127 out.go:177] * Using the docker driver based on user configuration
	I0729 12:07:58.446798   24127 start.go:297] selected driver: docker
	I0729 12:07:58.446826   24127 start.go:901] validating driver "docker" against <nil>
	I0729 12:07:58.446841   24127 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 12:07:58.451717   24127 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 12:07:58.529455   24127 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:109 OomKillDisable:false NGoroutines:218 SystemTime:2024-07-29 19:07:58.520704362 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 12:07:58.529623   24127 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 12:07:58.529815   24127 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 12:07:58.550859   24127 out.go:177] * Using Docker Desktop driver with root privileges
	I0729 12:07:58.572891   24127 cni.go:84] Creating CNI manager for ""
	I0729 12:07:58.572933   24127 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 12:07:58.572948   24127 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 12:07:58.573067   24127 start.go:340] cluster config:
	{Name:force-systemd-flag-463000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
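[editor's note] The dump above is the in-memory cluster config that minikube then persists to the profile's config.json (the profile.go:143 "Saving config" line below). As a minimal illustration, not minikube's actual types, a few of the dumped fields can be read back like this; the struct is a hypothetical subset named after fields visible in the dump:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // clusterConfig is a hypothetical subset of the struct dumped above;
    // only fields that appear in the log are included.
    type clusterConfig struct {
        Name   string
        Driver string
        Memory int
        CPUs   int
    }

    func main() {
        // Path as logged by profile.go:143 below.
        data, err := os.ReadFile("/Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/force-systemd-flag-463000/config.json")
        if err != nil {
            panic(err)
        }
        var cfg clusterConfig
        if err := json.Unmarshal(data, &cfg); err != nil {
            panic(err)
        }
        fmt.Printf("%s: driver=%s, %dMB, %d CPUs\n", cfg.Name, cfg.Driver, cfg.Memory, cfg.CPUs)
    }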
	I0729 12:07:58.594816   24127 out.go:177] * Starting "force-systemd-flag-463000" primary control-plane node in "force-systemd-flag-463000" cluster
	I0729 12:07:58.636801   24127 cache.go:121] Beginning downloading kic base image for docker with docker
	I0729 12:07:58.657777   24127 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0729 12:07:58.699790   24127 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 12:07:58.699836   24127 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 12:07:58.699871   24127 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0729 12:07:58.699890   24127 cache.go:56] Caching tarball of preloaded images
	I0729 12:07:58.700131   24127 preload.go:172] Found /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0729 12:07:58.700151   24127 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 12:07:58.700302   24127 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/force-systemd-flag-463000/config.json ...
	I0729 12:07:58.700923   24127 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/force-systemd-flag-463000/config.json: {Name:mk833df5b1ecbfad06c5c362cdfbee38734a45a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0729 12:07:58.725545   24127 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0729 12:07:58.725560   24127 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 12:07:58.725696   24127 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 12:07:58.725716   24127 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 12:07:58.725723   24127 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 12:07:58.725731   24127 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 12:07:58.725736   24127 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0729 12:07:58.728909   24127 image.go:273] response: 
	I0729 12:07:58.874179   24127 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0729 12:07:58.874232   24127 cache.go:194] Successfully downloaded all kic artifacts
	I0729 12:07:58.874283   24127 start.go:360] acquireMachinesLock for force-systemd-flag-463000: {Name:mkcfa208b1b162094c4791c2dfbb3d0ad1fdeea5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:07:58.874989   24127 start.go:364] duration metric: took 692.908µs to acquireMachinesLock for "force-systemd-flag-463000"
	I0729 12:07:58.875039   24127 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-463000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 12:07:58.875105   24127 start.go:125] createHost starting for "" (driver="docker")
	I0729 12:07:58.917149   24127 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0729 12:07:58.917341   24127 start.go:159] libmachine.API.Create for "force-systemd-flag-463000" (driver="docker")
	I0729 12:07:58.917367   24127 client.go:168] LocalClient.Create starting
	I0729 12:07:58.917470   24127 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/ca.pem
	I0729 12:07:58.917522   24127 main.go:141] libmachine: Decoding PEM data...
	I0729 12:07:58.917538   24127 main.go:141] libmachine: Parsing certificate...
	I0729 12:07:58.917585   24127 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/cert.pem
	I0729 12:07:58.917626   24127 main.go:141] libmachine: Decoding PEM data...
	I0729 12:07:58.917634   24127 main.go:141] libmachine: Parsing certificate...
	I0729 12:07:58.918145   24127 cli_runner.go:164] Run: docker network inspect force-systemd-flag-463000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 12:07:58.935451   24127 cli_runner.go:211] docker network inspect force-systemd-flag-463000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 12:07:58.935573   24127 network_create.go:284] running [docker network inspect force-systemd-flag-463000] to gather additional debugging logs...
	I0729 12:07:58.935589   24127 cli_runner.go:164] Run: docker network inspect force-systemd-flag-463000
	W0729 12:07:58.953389   24127 cli_runner.go:211] docker network inspect force-systemd-flag-463000 returned with exit code 1
	I0729 12:07:58.953427   24127 network_create.go:287] error running [docker network inspect force-systemd-flag-463000]: docker network inspect force-systemd-flag-463000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-463000 not found
	I0729 12:07:58.953444   24127 network_create.go:289] output of [docker network inspect force-systemd-flag-463000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-463000 not found
	
	** /stderr **
	I0729 12:07:58.953572   24127 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 12:07:58.972501   24127 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:07:58.974120   24127 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:07:58.974464   24127 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0013d11d0}
	I0729 12:07:58.974480   24127 network_create.go:124] attempt to create docker network force-systemd-flag-463000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0729 12:07:58.974554   24127 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-463000 force-systemd-flag-463000
	I0729 12:07:59.037410   24127 network_create.go:108] docker network force-systemd-flag-463000 192.168.67.0/24 created
	I0729 12:07:59.037456   24127 kic.go:121] calculated static IP "192.168.67.2" for the "force-systemd-flag-463000" container
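[editor's note] The subnet selection above follows a visible pattern: candidate /24s step the third octet by 9 (192.168.49.0, .58.0, .67.0, ...), subnets already reserved by existing docker networks are skipped, the gateway is .1, and the node's static IP is .2. A simplified model of that scan, with the stride inferred from the log rather than taken from minikube's actual network.go:

    package main

    import "fmt"

    // pickSubnet walks candidate private /24s, skipping reserved ones.
    // The starting octet and stride of 9 are inferred from the log above.
    func pickSubnet(reserved map[string]bool) string {
        for octet := 49; octet <= 247; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if reserved[cidr] {
                fmt.Println("skipping reserved subnet", cidr)
                continue
            }
            return cidr
        }
        return ""
    }

    func main() {
        // The first attempt above found 49 and 58 reserved and settled on 67.
        reserved := map[string]bool{"192.168.49.0/24": true, "192.168.58.0/24": true}
        fmt.Println("using free private subnet", pickSubnet(reserved))
    }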
	I0729 12:07:59.037578   24127 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 12:07:59.056843   24127 cli_runner.go:164] Run: docker volume create force-systemd-flag-463000 --label name.minikube.sigs.k8s.io=force-systemd-flag-463000 --label created_by.minikube.sigs.k8s.io=true
	I0729 12:07:59.074910   24127 oci.go:103] Successfully created a docker volume force-systemd-flag-463000
	I0729 12:07:59.075055   24127 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-463000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-463000 --entrypoint /usr/bin/test -v force-systemd-flag-463000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 12:07:59.514103   24127 oci.go:107] Successfully prepared a docker volume force-systemd-flag-463000
	I0729 12:07:59.514144   24127 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 12:07:59.514164   24127 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 12:07:59.514315   24127 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-463000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0729 12:13:58.919423   24127 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:13:58.919565   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:13:58.938851   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	I0729 12:13:58.938974   24127 retry.go:31] will retry after 271.014832ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:13:59.212358   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:13:59.232259   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	I0729 12:13:59.232352   24127 retry.go:31] will retry after 269.152154ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:13:59.503379   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:13:59.523448   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	I0729 12:13:59.523562   24127 retry.go:31] will retry after 794.441592ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:00.320449   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:14:00.339832   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	W0729 12:14:00.339943   24127 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	
	W0729 12:14:00.339965   24127 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
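[editor's note] Every retry.go:31 line above has the same shape: inspect the container for the published host port of 22/tcp, fail with "No such container", sleep a short jittered delay (271ms, 269ms, 794ms, ...), and try again. A minimal sketch of that loop, as an illustration rather than minikube's retry package; the delay range is an assumption chosen to resemble the logged values:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs op up to attempts times, sleeping a jittered delay between
    // failures, mirroring the retry.go:31 log lines above.
    func retry(attempts int, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            delay := time.Duration(200+rand.Intn(600)) * time.Millisecond
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        _ = retry(4, func() error {
            // Stand-in for "docker container inspect ... force-systemd-flag-463000",
            // which keeps failing here because the container was never created.
            return errors.New("No such container: force-systemd-flag-463000")
        })
    }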
	I0729 12:14:00.340032   24127 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 12:14:00.340107   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:14:00.358126   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	I0729 12:14:00.358213   24127 retry.go:31] will retry after 310.590933ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:00.671218   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:14:00.690837   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	I0729 12:14:00.690935   24127 retry.go:31] will retry after 223.278078ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:00.915530   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:14:00.934689   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	I0729 12:14:00.934776   24127 retry.go:31] will retry after 335.846826ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:01.271410   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:14:01.291030   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	W0729 12:14:01.291142   24127 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	
	W0729 12:14:01.291156   24127 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:01.291173   24127 start.go:128] duration metric: took 6m2.41641441s to createHost
	I0729 12:14:01.291181   24127 start.go:83] releasing machines lock for "force-systemd-flag-463000", held for 6m2.416541138s
	W0729 12:14:01.291196   24127 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
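[editor's note] The arithmetic here is the heart of the failure: the cluster config dumped earlier sets StartHostTimeout:6m0s (the "360.000000 seconds" above), the preload-extraction container was started at 12:07:59, and createHost reports 6m2.4s, so the host build is abandoned while the tar extraction is presumably still running; the force-systemd-flag-463000 container itself was never created, hence every "No such container" above. A sketch of that bound, assuming a context-style timeout; only the 6-minute value comes from the log, the enforcement mechanism does not:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // StartHostTimeout from the cluster config dumped earlier.
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        // Stand-in for the long-running preload extraction
        // (docker run ... tar -I lz4 -xf /preloaded.tar -C /extractDir).
        cmd := exec.CommandContext(ctx, "sleep", "600")
        if err := cmd.Run(); err != nil {
            fmt.Println("create host timed out:", ctx.Err())
        }
    }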
	I0729 12:14:01.291654   24127 cli_runner.go:164] Run: docker container inspect force-systemd-flag-463000 --format={{.State.Status}}
	W0729 12:14:01.309472   24127 cli_runner.go:211] docker container inspect force-systemd-flag-463000 --format={{.State.Status}} returned with exit code 1
	I0729 12:14:01.309525   24127 delete.go:82] Unable to get host status for force-systemd-flag-463000, assuming it has already been deleted: state: unknown state "force-systemd-flag-463000": docker container inspect force-systemd-flag-463000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	W0729 12:14:01.309601   24127 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0729 12:14:01.309613   24127 start.go:729] Will try again in 5 seconds ...
	I0729 12:14:06.311247   24127 start.go:360] acquireMachinesLock for force-systemd-flag-463000: {Name:mkcfa208b1b162094c4791c2dfbb3d0ad1fdeea5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:14:06.312396   24127 start.go:364] duration metric: took 351.231µs to acquireMachinesLock for "force-systemd-flag-463000"
	I0729 12:14:06.312442   24127 start.go:96] Skipping create...Using existing machine configuration
	I0729 12:14:06.312457   24127 fix.go:54] fixHost starting: 
	I0729 12:14:06.312922   24127 cli_runner.go:164] Run: docker container inspect force-systemd-flag-463000 --format={{.State.Status}}
	W0729 12:14:06.331917   24127 cli_runner.go:211] docker container inspect force-systemd-flag-463000 --format={{.State.Status}} returned with exit code 1
	I0729 12:14:06.331966   24127 fix.go:112] recreateIfNeeded on force-systemd-flag-463000: state= err=unknown state "force-systemd-flag-463000": docker container inspect force-systemd-flag-463000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:06.331985   24127 fix.go:117] machineExists: false. err=machine does not exist
	I0729 12:14:06.353926   24127 out.go:177] * docker "force-systemd-flag-463000" container is missing, will recreate.
	I0729 12:14:06.375356   24127 delete.go:124] DEMOLISHING force-systemd-flag-463000 ...
	I0729 12:14:06.375573   24127 cli_runner.go:164] Run: docker container inspect force-systemd-flag-463000 --format={{.State.Status}}
	W0729 12:14:06.394088   24127 cli_runner.go:211] docker container inspect force-systemd-flag-463000 --format={{.State.Status}} returned with exit code 1
	W0729 12:14:06.394148   24127 stop.go:83] unable to get state: unknown state "force-systemd-flag-463000": docker container inspect force-systemd-flag-463000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:06.394173   24127 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-463000": docker container inspect force-systemd-flag-463000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:06.394538   24127 cli_runner.go:164] Run: docker container inspect force-systemd-flag-463000 --format={{.State.Status}}
	W0729 12:14:06.411443   24127 cli_runner.go:211] docker container inspect force-systemd-flag-463000 --format={{.State.Status}} returned with exit code 1
	I0729 12:14:06.411498   24127 delete.go:82] Unable to get host status for force-systemd-flag-463000, assuming it has already been deleted: state: unknown state "force-systemd-flag-463000": docker container inspect force-systemd-flag-463000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:06.411582   24127 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-463000
	W0729 12:14:06.428566   24127 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-463000 returned with exit code 1
	I0729 12:14:06.428616   24127 kic.go:371] could not find the container force-systemd-flag-463000 to remove it. will try anyways
	I0729 12:14:06.428696   24127 cli_runner.go:164] Run: docker container inspect force-systemd-flag-463000 --format={{.State.Status}}
	W0729 12:14:06.445501   24127 cli_runner.go:211] docker container inspect force-systemd-flag-463000 --format={{.State.Status}} returned with exit code 1
	W0729 12:14:06.445563   24127 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-463000": docker container inspect force-systemd-flag-463000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:06.445663   24127 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-463000 /bin/bash -c "sudo init 0"
	W0729 12:14:06.462442   24127 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-463000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0729 12:14:06.462479   24127 oci.go:650] error shutdown force-systemd-flag-463000: docker exec --privileged -t force-systemd-flag-463000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:07.463881   24127 cli_runner.go:164] Run: docker container inspect force-systemd-flag-463000 --format={{.State.Status}}
	W0729 12:14:07.483423   24127 cli_runner.go:211] docker container inspect force-systemd-flag-463000 --format={{.State.Status}} returned with exit code 1
	I0729 12:14:07.483484   24127 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-463000": docker container inspect force-systemd-flag-463000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:07.483493   24127 oci.go:664] temporary error: container force-systemd-flag-463000 status is  but expect it to be exited
	I0729 12:14:07.483516   24127 retry.go:31] will retry after 607.478082ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-463000": docker container inspect force-systemd-flag-463000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:08.093368   24127 cli_runner.go:164] Run: docker container inspect force-systemd-flag-463000 --format={{.State.Status}}
	W0729 12:14:08.112756   24127 cli_runner.go:211] docker container inspect force-systemd-flag-463000 --format={{.State.Status}} returned with exit code 1
	I0729 12:14:08.112808   24127 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-463000": docker container inspect force-systemd-flag-463000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:08.112824   24127 oci.go:664] temporary error: container force-systemd-flag-463000 status is  but expect it to be exited
	I0729 12:14:08.112852   24127 retry.go:31] will retry after 574.645291ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-463000": docker container inspect force-systemd-flag-463000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:08.688021   24127 cli_runner.go:164] Run: docker container inspect force-systemd-flag-463000 --format={{.State.Status}}
	W0729 12:14:08.708432   24127 cli_runner.go:211] docker container inspect force-systemd-flag-463000 --format={{.State.Status}} returned with exit code 1
	I0729 12:14:08.708491   24127 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-463000": docker container inspect force-systemd-flag-463000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:08.708501   24127 oci.go:664] temporary error: container force-systemd-flag-463000 status is  but expect it to be exited
	I0729 12:14:08.708525   24127 retry.go:31] will retry after 952.054166ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-463000": docker container inspect force-systemd-flag-463000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:09.661951   24127 cli_runner.go:164] Run: docker container inspect force-systemd-flag-463000 --format={{.State.Status}}
	W0729 12:14:09.681496   24127 cli_runner.go:211] docker container inspect force-systemd-flag-463000 --format={{.State.Status}} returned with exit code 1
	I0729 12:14:09.681552   24127 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-463000": docker container inspect force-systemd-flag-463000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:09.681561   24127 oci.go:664] temporary error: container force-systemd-flag-463000 status is  but expect it to be exited
	I0729 12:14:09.681583   24127 retry.go:31] will retry after 2.504667914s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-463000": docker container inspect force-systemd-flag-463000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:12.188703   24127 cli_runner.go:164] Run: docker container inspect force-systemd-flag-463000 --format={{.State.Status}}
	W0729 12:14:12.209027   24127 cli_runner.go:211] docker container inspect force-systemd-flag-463000 --format={{.State.Status}} returned with exit code 1
	I0729 12:14:12.209078   24127 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-463000": docker container inspect force-systemd-flag-463000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:12.209088   24127 oci.go:664] temporary error: container force-systemd-flag-463000 status is  but expect it to be exited
	I0729 12:14:12.209114   24127 retry.go:31] will retry after 3.175040864s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-463000": docker container inspect force-systemd-flag-463000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:15.384445   24127 cli_runner.go:164] Run: docker container inspect force-systemd-flag-463000 --format={{.State.Status}}
	W0729 12:14:15.403767   24127 cli_runner.go:211] docker container inspect force-systemd-flag-463000 --format={{.State.Status}} returned with exit code 1
	I0729 12:14:15.403815   24127 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-463000": docker container inspect force-systemd-flag-463000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:15.403829   24127 oci.go:664] temporary error: container force-systemd-flag-463000 status is  but expect it to be exited
	I0729 12:14:15.403855   24127 retry.go:31] will retry after 3.792470494s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-463000": docker container inspect force-systemd-flag-463000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:19.197772   24127 cli_runner.go:164] Run: docker container inspect force-systemd-flag-463000 --format={{.State.Status}}
	W0729 12:14:19.217820   24127 cli_runner.go:211] docker container inspect force-systemd-flag-463000 --format={{.State.Status}} returned with exit code 1
	I0729 12:14:19.217876   24127 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-463000": docker container inspect force-systemd-flag-463000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:14:19.217887   24127 oci.go:664] temporary error: container force-systemd-flag-463000 status is  but expect it to be exited
	I0729 12:14:19.217918   24127 oci.go:88] couldn't shut down force-systemd-flag-463000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-463000": docker container inspect force-systemd-flag-463000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	 
	I0729 12:14:19.218020   24127 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-463000
	I0729 12:14:19.235985   24127 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-463000
	W0729 12:14:19.252781   24127 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-463000 returned with exit code 1
	I0729 12:14:19.252916   24127 cli_runner.go:164] Run: docker network inspect force-systemd-flag-463000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 12:14:19.270354   24127 cli_runner.go:164] Run: docker network rm force-systemd-flag-463000
	I0729 12:14:19.347438   24127 fix.go:124] Sleeping 1 second for extra luck!
	I0729 12:14:20.349617   24127 start.go:125] createHost starting for "" (driver="docker")
	I0729 12:14:20.372084   24127 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0729 12:14:20.372285   24127 start.go:159] libmachine.API.Create for "force-systemd-flag-463000" (driver="docker")
	I0729 12:14:20.372311   24127 client.go:168] LocalClient.Create starting
	I0729 12:14:20.372534   24127 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/ca.pem
	I0729 12:14:20.372628   24127 main.go:141] libmachine: Decoding PEM data...
	I0729 12:14:20.372655   24127 main.go:141] libmachine: Parsing certificate...
	I0729 12:14:20.372739   24127 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/cert.pem
	I0729 12:14:20.372819   24127 main.go:141] libmachine: Decoding PEM data...
	I0729 12:14:20.372834   24127 main.go:141] libmachine: Parsing certificate...
	I0729 12:14:20.373590   24127 cli_runner.go:164] Run: docker network inspect force-systemd-flag-463000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 12:14:20.392700   24127 cli_runner.go:211] docker network inspect force-systemd-flag-463000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 12:14:20.392796   24127 network_create.go:284] running [docker network inspect force-systemd-flag-463000] to gather additional debugging logs...
	I0729 12:14:20.392817   24127 cli_runner.go:164] Run: docker network inspect force-systemd-flag-463000
	W0729 12:14:20.410506   24127 cli_runner.go:211] docker network inspect force-systemd-flag-463000 returned with exit code 1
	I0729 12:14:20.410540   24127 network_create.go:287] error running [docker network inspect force-systemd-flag-463000]: docker network inspect force-systemd-flag-463000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-463000 not found
	I0729 12:14:20.410558   24127 network_create.go:289] output of [docker network inspect force-systemd-flag-463000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-463000 not found
	
	** /stderr **
	I0729 12:14:20.410706   24127 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 12:14:20.430136   24127 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:14:20.431756   24127 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:14:20.433244   24127 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:14:20.434948   24127 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:14:20.436632   24127 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:14:20.437103   24127 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001435b50}
	I0729 12:14:20.437122   24127 network_create.go:124] attempt to create docker network force-systemd-flag-463000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0729 12:14:20.437213   24127 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-463000 force-systemd-flag-463000
	I0729 12:14:20.501083   24127 network_create.go:108] docker network force-systemd-flag-463000 192.168.94.0/24 created
	I0729 12:14:20.501122   24127 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-flag-463000" container
	I0729 12:14:20.501229   24127 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 12:14:20.520759   24127 cli_runner.go:164] Run: docker volume create force-systemd-flag-463000 --label name.minikube.sigs.k8s.io=force-systemd-flag-463000 --label created_by.minikube.sigs.k8s.io=true
	I0729 12:14:20.538039   24127 oci.go:103] Successfully created a docker volume force-systemd-flag-463000
	I0729 12:14:20.538168   24127 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-463000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-463000 --entrypoint /usr/bin/test -v force-systemd-flag-463000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 12:14:20.860486   24127 oci.go:107] Successfully prepared a docker volume force-systemd-flag-463000
	I0729 12:14:20.860517   24127 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 12:14:20.860531   24127 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 12:14:20.860647   24127 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-463000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
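[editor's note] The second attempt repeats the first exactly: a fresh network (now 192.168.94.0/24), a fresh volume, and the same extraction command, after which the log goes silent until 12:20:20 on the next line below. That silent gap is where the six-minute budget goes; a quick check from the two logged timestamps (the computation is ours, the timestamps are not):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "15:04:05.000000"
        start, _ := time.Parse(layout, "12:14:20.860647") // extraction Run above
        next, _ := time.Parse(layout, "12:20:20.394887")  // next log line below
        fmt.Println("silent gap:", next.Sub(start))       // ≈ 5m59.5s of the 6m0s budget
    }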
	I0729 12:20:20.394887   24127 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:20:20.395017   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:20:20.414379   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	I0729 12:20:20.414495   24127 retry.go:31] will retry after 163.210756ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:20:20.580001   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:20:20.600407   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	I0729 12:20:20.600531   24127 retry.go:31] will retry after 199.065762ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:20:20.801275   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:20:20.821168   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	I0729 12:20:20.821267   24127 retry.go:31] will retry after 572.534091ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:20:21.394350   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:20:21.414154   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	I0729 12:20:21.414269   24127 retry.go:31] will retry after 706.25753ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:20:22.122338   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:20:22.140763   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	W0729 12:20:22.140881   24127 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	
	W0729 12:20:22.140901   24127 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:20:22.140964   24127 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 12:20:22.141029   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:20:22.158119   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	I0729 12:20:22.158214   24127 retry.go:31] will retry after 318.245252ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:20:22.477551   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:20:22.497167   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	I0729 12:20:22.497267   24127 retry.go:31] will retry after 437.832299ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:20:22.937550   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:20:22.958019   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	I0729 12:20:22.958133   24127 retry.go:31] will retry after 677.268559ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:20:23.638025   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:20:23.657819   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	W0729 12:20:23.657938   24127 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	
	W0729 12:20:23.657965   24127 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:20:23.657975   24127 start.go:128] duration metric: took 6m3.286364942s to createHost
	I0729 12:20:23.658041   24127 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:20:23.658106   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:20:23.675975   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	I0729 12:20:23.676077   24127 retry.go:31] will retry after 155.012093ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:20:23.833531   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:20:23.852048   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	I0729 12:20:23.852140   24127 retry.go:31] will retry after 462.973429ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:20:24.315948   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:20:24.335426   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	I0729 12:20:24.335520   24127 retry.go:31] will retry after 467.728691ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:20:24.804999   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:20:24.825380   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	I0729 12:20:24.825476   24127 retry.go:31] will retry after 598.097489ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:20:25.425593   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:20:25.445373   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	W0729 12:20:25.445470   24127 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	
	W0729 12:20:25.445485   24127 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:20:25.445547   24127 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 12:20:25.445605   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:20:25.462515   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	I0729 12:20:25.462632   24127 retry.go:31] will retry after 369.328629ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:20:25.834382   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:20:25.853535   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	I0729 12:20:25.853645   24127 retry.go:31] will retry after 443.178083ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:20:26.298030   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:20:26.317935   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	I0729 12:20:26.318031   24127 retry.go:31] will retry after 534.237192ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:20:26.853717   24127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000
	W0729 12:20:26.873462   24127 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000 returned with exit code 1
	W0729 12:20:26.873568   24127 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	
	W0729 12:20:26.873584   24127 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-463000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-463000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	I0729 12:20:26.873595   24127 fix.go:56] duration metric: took 6m20.538502311s for fixHost
	I0729 12:20:26.873603   24127 start.go:83] releasing machines lock for "force-systemd-flag-463000", held for 6m20.538553173s
	W0729 12:20:26.873690   24127 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-463000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-463000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0729 12:20:26.917031   24127 out.go:177] 
	W0729 12:20:26.938356   24127 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0729 12:20:26.938411   24127 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0729 12:20:26.938440   24127 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0729 12:20:26.960381   24127 out.go:177] 

** /stderr **
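
Every retry in the trace above fails at the same step: resolving the host port that Docker published for the guest's 22/tcp. A minimal standalone sketch of that lookup, shown for illustration rather than as minikube's actual implementation (it assumes only that the docker CLI is on PATH), reproduces the exit-status-1 path whenever the container does not exist:

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// sshHostPort asks dockerd which host port is published for the
	// container's 22/tcp, using the Go template from the log above.
	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			// A missing container surfaces as exit status 1 with
			// "No such container" on stderr, matching the log.
			return "", fmt.Errorf("get port 22 for %q: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}
	
	func main() {
		port, err := sshHostPort("force-systemd-flag-463000")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh port:", port)
	}

Since no force-systemd-flag-463000 container was ever created, this lookup can never succeed, so the retries only consume the 360-second createHost budget.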
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-463000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-463000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-463000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (165.769938ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-flag-463000 host status: state: unknown state "force-systemd-flag-463000": docker container inspect force-systemd-flag-463000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000
	

** /stderr **
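
The follow-up ssh probe exits 80 one step earlier, at the host-status check. A sketch of that check, illustrative only and not minikube's status code, shows why any per-profile command short-circuits once the container is gone:

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// containerState mirrors the probe from the log: read .State.Status
	// from dockerd and treat a failed inspect as an unknown state.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", fmt.Errorf("unknown state %q: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil // e.g. "running", "exited"
	}
	
	func main() {
		if _, err := containerState("force-systemd-flag-463000"); err != nil {
			fmt.Println("host status unavailable:", err)
		}
	}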
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-463000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-29 12:20:27.180992 -0700 PDT m=+6892.278712817
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-463000
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-463000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-463000",
	        "Id": "ee96e53b7310c124164c421090f7ae2590379ca8c5c228b438001eb212173b42",
	        "Created": "2024-07-29T19:14:20.452789435Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-flag-463000"
	        }
	    }
	]

-- /stdout --
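
Note that the object this post-mortem inspect returned is not a container at all: the Scope, Driver, IPAM, and Containers fields identify it as the leftover bridge network minikube created at 19:14:20, before container creation stalled. A hedged sketch for finding such leftovers by their ownership label, using standard docker CLI filters (this helper is not part of the test suite):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// List docker networks carrying minikube's ownership label, the same
	// created_by.minikube.sigs.k8s.io label shown in the inspect output
	// above; "minikube delete -p <profile>" normally removes them.
	func main() {
		out, err := exec.Command("docker", "network", "ls",
			"--filter", "label=created_by.minikube.sigs.k8s.io=true",
			"--format", "{{.Name}}").Output()
		if err != nil {
			fmt.Println("network ls failed:", err)
			return
		}
		fmt.Printf("leftover minikube networks:\n%s", out)
	}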
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-463000 -n force-systemd-flag-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-463000 -n force-systemd-flag-463000: exit status 7 (72.499498ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 12:20:27.272905   24406 status.go:249] status error: host: state: unknown state "force-systemd-flag-463000": docker container inspect force-systemd-flag-463000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-463000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-463000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-463000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-463000
--- FAIL: TestForceSystemdFlag (749.68s)
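
Both halves of this failure, and the TestForceSystemdEnv run that follows, show the same shape: a fixed creation budget ("create host timed out in 360.000000 seconds") burned by short, growing, jittered retries (155ms, 318ms, 437ms, 677ms, ...). A standalone sketch of that retry pattern, with illustrative timings rather than minikube's exact policy:

	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// retryUntil re-runs op with a growing, jittered delay until it
	// succeeds or the overall budget is spent, roughly matching the
	// "will retry after ..." cadence in the logs above.
	func retryUntil(budget time.Duration, op func() error) error {
		start := time.Now()
		delay := 150 * time.Millisecond
		for {
			err := op()
			if err == nil {
				return nil
			}
			if time.Since(start) > budget {
				return fmt.Errorf("create host timed out in %s: %w", budget, err)
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)/2)) // jitter
			fmt.Printf("will retry after %s: %v\n", wait.Round(time.Millisecond), err)
			time.Sleep(wait)
			delay += delay / 2 // grow ~1.5x per attempt
		}
	}
	
	func main() {
		err := retryUntil(3*time.Second, func() error {
			return errors.New("No such container: force-systemd-flag-463000")
		})
		fmt.Println(err)
	}

Because the underlying docker inspect can never succeed here, the backoff only paces the failure; the loop always ends at the budget.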

x
+
TestForceSystemdEnv (756.53s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-292000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E0729 11:57:59.006153   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 11:59:55.951230   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 12:00:30.996735   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 12:03:34.050057   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 12:04:56.019927   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 12:05:31.065953   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-292000 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 52 (12m35.786387474s)

-- stdout --
	* [force-systemd-env-292000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-env-292000" primary control-plane node in "force-systemd-env-292000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-292000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0729 11:56:04.582506   23580 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:56:04.582785   23580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:56:04.582791   23580 out.go:304] Setting ErrFile to fd 2...
	I0729 11:56:04.582794   23580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:56:04.582971   23580 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 11:56:04.584425   23580 out.go:298] Setting JSON to false
	I0729 11:56:04.607027   23580 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":10534,"bootTime":1722268830,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 11:56:04.607141   23580 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 11:56:04.628742   23580 out.go:177] * [force-systemd-env-292000] minikube v1.33.1 on Darwin 14.5
	I0729 11:56:04.649205   23580 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 11:56:04.649262   23580 notify.go:220] Checking for updates...
	I0729 11:56:04.692404   23580 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	I0729 11:56:04.713121   23580 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 11:56:04.734384   23580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:56:04.755365   23580 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	I0729 11:56:04.776194   23580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0729 11:56:04.798159   23580 config.go:182] Loaded profile config "offline-docker-206000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:56:04.798265   23580 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:56:04.821432   23580 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 11:56:04.821597   23580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 11:56:04.902412   23580 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:false NGoroutines:182 SystemTime:2024-07-29 18:56:04.893246409 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 11:56:04.943285   23580 out.go:177] * Using the docker driver based on user configuration
	I0729 11:56:04.964410   23580 start.go:297] selected driver: docker
	I0729 11:56:04.964424   23580 start.go:901] validating driver "docker" against <nil>
	I0729 11:56:04.964436   23580 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:56:04.967879   23580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 11:56:05.046622   23580 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:97 OomKillDisable:false NGoroutines:182 SystemTime:2024-07-29 18:56:05.037398535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 11:56:05.046798   23580 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 11:56:05.046980   23580 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 11:56:05.068522   23580 out.go:177] * Using Docker Desktop driver with root privileges
	I0729 11:56:05.091302   23580 cni.go:84] Creating CNI manager for ""
	I0729 11:56:05.091331   23580 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 11:56:05.091345   23580 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 11:56:05.091428   23580 start.go:340] cluster config:
	{Name:force-systemd-env-292000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-292000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:56:05.112441   23580 out.go:177] * Starting "force-systemd-env-292000" primary control-plane node in "force-systemd-env-292000" cluster
	I0729 11:56:05.154387   23580 cache.go:121] Beginning downloading kic base image for docker with docker
	I0729 11:56:05.175446   23580 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0729 11:56:05.217419   23580 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 11:56:05.217473   23580 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 11:56:05.217516   23580 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0729 11:56:05.217536   23580 cache.go:56] Caching tarball of preloaded images
	I0729 11:56:05.217770   23580 preload.go:172] Found /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0729 11:56:05.217790   23580 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 11:56:05.218802   23580 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/force-systemd-env-292000/config.json ...
	I0729 11:56:05.218970   23580 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/force-systemd-env-292000/config.json: {Name:mkeb1e42adf9938991be25851edc9dc0da623d3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0729 11:56:05.243336   23580 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0729 11:56:05.243347   23580 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 11:56:05.243472   23580 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 11:56:05.243497   23580 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 11:56:05.243503   23580 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 11:56:05.243511   23580 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 11:56:05.243516   23580 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0729 11:56:05.246271   23580 image.go:273] response: 
	I0729 11:56:05.373597   23580 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0729 11:56:05.373650   23580 cache.go:194] Successfully downloaded all kic artifacts
	I0729 11:56:05.373708   23580 start.go:360] acquireMachinesLock for force-systemd-env-292000: {Name:mkce1deeb1c3480f7fd03327891d68f1e46690de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:56:05.373874   23580 start.go:364] duration metric: took 154.449µs to acquireMachinesLock for "force-systemd-env-292000"
	I0729 11:56:05.373902   23580 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-292000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-292000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 11:56:05.373956   23580 start.go:125] createHost starting for "" (driver="docker")
	I0729 11:56:05.416723   23580 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0729 11:56:05.416929   23580 start.go:159] libmachine.API.Create for "force-systemd-env-292000" (driver="docker")
	I0729 11:56:05.416951   23580 client.go:168] LocalClient.Create starting
	I0729 11:56:05.417061   23580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/ca.pem
	I0729 11:56:05.417112   23580 main.go:141] libmachine: Decoding PEM data...
	I0729 11:56:05.417126   23580 main.go:141] libmachine: Parsing certificate...
	I0729 11:56:05.417184   23580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/cert.pem
	I0729 11:56:05.417227   23580 main.go:141] libmachine: Decoding PEM data...
	I0729 11:56:05.417235   23580 main.go:141] libmachine: Parsing certificate...
	I0729 11:56:05.417750   23580 cli_runner.go:164] Run: docker network inspect force-systemd-env-292000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 11:56:05.435269   23580 cli_runner.go:211] docker network inspect force-systemd-env-292000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 11:56:05.435381   23580 network_create.go:284] running [docker network inspect force-systemd-env-292000] to gather additional debugging logs...
	I0729 11:56:05.435400   23580 cli_runner.go:164] Run: docker network inspect force-systemd-env-292000
	W0729 11:56:05.452618   23580 cli_runner.go:211] docker network inspect force-systemd-env-292000 returned with exit code 1
	I0729 11:56:05.452651   23580 network_create.go:287] error running [docker network inspect force-systemd-env-292000]: docker network inspect force-systemd-env-292000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-292000 not found
	I0729 11:56:05.452664   23580 network_create.go:289] output of [docker network inspect force-systemd-env-292000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-292000 not found
	
	** /stderr **
	I0729 11:56:05.452787   23580 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 11:56:05.471774   23580 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 11:56:05.473235   23580 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 11:56:05.474582   23580 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 11:56:05.474927   23580 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015b4950}
	I0729 11:56:05.474943   23580 network_create.go:124] attempt to create docker network force-systemd-env-292000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0729 11:56:05.475013   23580 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-292000 force-systemd-env-292000
	W0729 11:56:05.492509   23580 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-292000 force-systemd-env-292000 returned with exit code 1
	W0729 11:56:05.492548   23580 network_create.go:149] failed to create docker network force-systemd-env-292000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-292000 force-systemd-env-292000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0729 11:56:05.492568   23580 network_create.go:116] failed to create docker network force-systemd-env-292000 192.168.76.0/24, will retry: subnet is taken
	I0729 11:56:05.494163   23580 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 11:56:05.494526   23580 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015b58d0}
	I0729 11:56:05.494538   23580 network_create.go:124] attempt to create docker network force-systemd-env-292000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0729 11:56:05.494607   23580 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-292000 force-systemd-env-292000
	I0729 11:56:05.558795   23580 network_create.go:108] docker network force-systemd-env-292000 192.168.85.0/24 created
	I0729 11:56:05.558838   23580 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-292000" container
	I0729 11:56:05.558970   23580 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 11:56:05.578579   23580 cli_runner.go:164] Run: docker volume create force-systemd-env-292000 --label name.minikube.sigs.k8s.io=force-systemd-env-292000 --label created_by.minikube.sigs.k8s.io=true
	I0729 11:56:05.596714   23580 oci.go:103] Successfully created a docker volume force-systemd-env-292000
	I0729 11:56:05.596852   23580 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-292000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-292000 --entrypoint /usr/bin/test -v force-systemd-env-292000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 11:56:06.089985   23580 oci.go:107] Successfully prepared a docker volume force-systemd-env-292000
	I0729 11:56:06.090035   23580 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 11:56:06.090052   23580 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 11:56:06.090152   23580 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-292000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0729 12:02:05.418020   23580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:02:05.418195   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:02:05.438297   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	I0729 12:02:05.438436   23580 retry.go:31] will retry after 300.302213ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:05.741105   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:02:05.760915   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	I0729 12:02:05.761029   23580 retry.go:31] will retry after 482.778422ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:06.244117   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:02:06.262315   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	I0729 12:02:06.262427   23580 retry.go:31] will retry after 692.608038ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:06.955472   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:02:06.974520   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	W0729 12:02:06.974623   23580 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	
	W0729 12:02:06.974640   23580 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:06.974706   23580 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 12:02:06.974769   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:02:06.992040   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	I0729 12:02:06.992133   23580 retry.go:31] will retry after 164.81993ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:07.158550   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:02:07.178391   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	I0729 12:02:07.178503   23580 retry.go:31] will retry after 522.236186ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:07.701914   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:02:07.721582   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	I0729 12:02:07.721687   23580 retry.go:31] will retry after 479.420342ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:08.203506   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:02:08.222687   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	I0729 12:02:08.222778   23580 retry.go:31] will retry after 518.902196ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:08.742056   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:02:08.760722   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	W0729 12:02:08.760831   23580 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	
	W0729 12:02:08.760848   23580 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:08.760867   23580 start.go:128] duration metric: took 6m3.387174971s to createHost
	I0729 12:02:08.760875   23580 start.go:83] releasing machines lock for "force-systemd-env-292000", held for 6m3.387277132s
	W0729 12:02:08.760889   23580 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0729 12:02:08.761326   23580 cli_runner.go:164] Run: docker container inspect force-systemd-env-292000 --format={{.State.Status}}
	W0729 12:02:08.778360   23580 cli_runner.go:211] docker container inspect force-systemd-env-292000 --format={{.State.Status}} returned with exit code 1
	I0729 12:02:08.778420   23580 delete.go:82] Unable to get host status for force-systemd-env-292000, assuming it has already been deleted: state: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	W0729 12:02:08.778516   23580 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0729 12:02:08.778526   23580 start.go:729] Will try again in 5 seconds ...
	I0729 12:02:13.779286   23580 start.go:360] acquireMachinesLock for force-systemd-env-292000: {Name:mkce1deeb1c3480f7fd03327891d68f1e46690de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 12:02:13.779475   23580 start.go:364] duration metric: took 152.719µs to acquireMachinesLock for "force-systemd-env-292000"
	I0729 12:02:13.779510   23580 start.go:96] Skipping create...Using existing machine configuration
	I0729 12:02:13.779526   23580 fix.go:54] fixHost starting: 
	I0729 12:02:13.779939   23580 cli_runner.go:164] Run: docker container inspect force-systemd-env-292000 --format={{.State.Status}}
	W0729 12:02:13.798147   23580 cli_runner.go:211] docker container inspect force-systemd-env-292000 --format={{.State.Status}} returned with exit code 1
	I0729 12:02:13.798207   23580 fix.go:112] recreateIfNeeded on force-systemd-env-292000: state= err=unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:13.798230   23580 fix.go:117] machineExists: false. err=machine does not exist
	I0729 12:02:13.820591   23580 out.go:177] * docker "force-systemd-env-292000" container is missing, will recreate.
	I0729 12:02:13.841856   23580 delete.go:124] DEMOLISHING force-systemd-env-292000 ...
	I0729 12:02:13.842067   23580 cli_runner.go:164] Run: docker container inspect force-systemd-env-292000 --format={{.State.Status}}
	W0729 12:02:13.860736   23580 cli_runner.go:211] docker container inspect force-systemd-env-292000 --format={{.State.Status}} returned with exit code 1
	W0729 12:02:13.860790   23580 stop.go:83] unable to get state: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:13.860808   23580 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:13.861193   23580 cli_runner.go:164] Run: docker container inspect force-systemd-env-292000 --format={{.State.Status}}
	W0729 12:02:13.878216   23580 cli_runner.go:211] docker container inspect force-systemd-env-292000 --format={{.State.Status}} returned with exit code 1
	I0729 12:02:13.878276   23580 delete.go:82] Unable to get host status for force-systemd-env-292000, assuming it has already been deleted: state: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:13.878372   23580 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-292000
	W0729 12:02:13.895375   23580 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-292000 returned with exit code 1
	I0729 12:02:13.895408   23580 kic.go:371] could not find the container force-systemd-env-292000 to remove it. will try anyways
	I0729 12:02:13.895492   23580 cli_runner.go:164] Run: docker container inspect force-systemd-env-292000 --format={{.State.Status}}
	W0729 12:02:13.912151   23580 cli_runner.go:211] docker container inspect force-systemd-env-292000 --format={{.State.Status}} returned with exit code 1
	W0729 12:02:13.912205   23580 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:13.912300   23580 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-292000 /bin/bash -c "sudo init 0"
	W0729 12:02:13.929538   23580 cli_runner.go:211] docker exec --privileged -t force-systemd-env-292000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0729 12:02:13.929577   23580 oci.go:650] error shutdown force-systemd-env-292000: docker exec --privileged -t force-systemd-env-292000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:14.930599   23580 cli_runner.go:164] Run: docker container inspect force-systemd-env-292000 --format={{.State.Status}}
	W0729 12:02:14.949791   23580 cli_runner.go:211] docker container inspect force-systemd-env-292000 --format={{.State.Status}} returned with exit code 1
	I0729 12:02:14.949842   23580 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:14.949857   23580 oci.go:664] temporary error: container force-systemd-env-292000 status is  but expect it to be exited
	I0729 12:02:14.949896   23580 retry.go:31] will retry after 725.246012ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:15.677507   23580 cli_runner.go:164] Run: docker container inspect force-systemd-env-292000 --format={{.State.Status}}
	W0729 12:02:15.697106   23580 cli_runner.go:211] docker container inspect force-systemd-env-292000 --format={{.State.Status}} returned with exit code 1
	I0729 12:02:15.697152   23580 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:15.697165   23580 oci.go:664] temporary error: container force-systemd-env-292000 status is  but expect it to be exited
	I0729 12:02:15.697199   23580 retry.go:31] will retry after 1.064908029s: couldn't verify container is exited. %v: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:16.764428   23580 cli_runner.go:164] Run: docker container inspect force-systemd-env-292000 --format={{.State.Status}}
	W0729 12:02:16.783968   23580 cli_runner.go:211] docker container inspect force-systemd-env-292000 --format={{.State.Status}} returned with exit code 1
	I0729 12:02:16.784020   23580 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:16.784030   23580 oci.go:664] temporary error: container force-systemd-env-292000 status is  but expect it to be exited
	I0729 12:02:16.784055   23580 retry.go:31] will retry after 717.073966ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:17.503504   23580 cli_runner.go:164] Run: docker container inspect force-systemd-env-292000 --format={{.State.Status}}
	W0729 12:02:17.523731   23580 cli_runner.go:211] docker container inspect force-systemd-env-292000 --format={{.State.Status}} returned with exit code 1
	I0729 12:02:17.523781   23580 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:17.523790   23580 oci.go:664] temporary error: container force-systemd-env-292000 status is  but expect it to be exited
	I0729 12:02:17.523817   23580 retry.go:31] will retry after 2.33429136s: couldn't verify container is exited. %v: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:19.860449   23580 cli_runner.go:164] Run: docker container inspect force-systemd-env-292000 --format={{.State.Status}}
	W0729 12:02:19.880327   23580 cli_runner.go:211] docker container inspect force-systemd-env-292000 --format={{.State.Status}} returned with exit code 1
	I0729 12:02:19.880377   23580 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:19.880391   23580 oci.go:664] temporary error: container force-systemd-env-292000 status is  but expect it to be exited
	I0729 12:02:19.880428   23580 retry.go:31] will retry after 2.294566075s: couldn't verify container is exited. %v: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:22.175315   23580 cli_runner.go:164] Run: docker container inspect force-systemd-env-292000 --format={{.State.Status}}
	W0729 12:02:22.194392   23580 cli_runner.go:211] docker container inspect force-systemd-env-292000 --format={{.State.Status}} returned with exit code 1
	I0729 12:02:22.194441   23580 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:22.194451   23580 oci.go:664] temporary error: container force-systemd-env-292000 status is  but expect it to be exited
	I0729 12:02:22.194478   23580 retry.go:31] will retry after 3.347860558s: couldn't verify container is exited. %v: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:25.544127   23580 cli_runner.go:164] Run: docker container inspect force-systemd-env-292000 --format={{.State.Status}}
	W0729 12:02:25.564241   23580 cli_runner.go:211] docker container inspect force-systemd-env-292000 --format={{.State.Status}} returned with exit code 1
	I0729 12:02:25.564289   23580 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:25.564305   23580 oci.go:664] temporary error: container force-systemd-env-292000 status is  but expect it to be exited
	I0729 12:02:25.564335   23580 retry.go:31] will retry after 8.293419285s: couldn't verify container is exited. %v: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:33.860110   23580 cli_runner.go:164] Run: docker container inspect force-systemd-env-292000 --format={{.State.Status}}
	W0729 12:02:33.878791   23580 cli_runner.go:211] docker container inspect force-systemd-env-292000 --format={{.State.Status}} returned with exit code 1
	I0729 12:02:33.878844   23580 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:02:33.878852   23580 oci.go:664] temporary error: container force-systemd-env-292000 status is  but expect it to be exited
	I0729 12:02:33.878880   23580 oci.go:88] couldn't shut down force-systemd-env-292000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	 
	I0729 12:02:33.878961   23580 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-292000
	I0729 12:02:33.896342   23580 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-292000
	W0729 12:02:33.913351   23580 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-292000 returned with exit code 1
	I0729 12:02:33.913461   23580 cli_runner.go:164] Run: docker network inspect force-systemd-env-292000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 12:02:33.930884   23580 cli_runner.go:164] Run: docker network rm force-systemd-env-292000
	I0729 12:02:34.008847   23580 fix.go:124] Sleeping 1 second for extra luck!
	I0729 12:02:35.009803   23580 start.go:125] createHost starting for "" (driver="docker")
	I0729 12:02:35.031998   23580 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0729 12:02:35.032199   23580 start.go:159] libmachine.API.Create for "force-systemd-env-292000" (driver="docker")
	I0729 12:02:35.032233   23580 client.go:168] LocalClient.Create starting
	I0729 12:02:35.032456   23580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/ca.pem
	I0729 12:02:35.032562   23580 main.go:141] libmachine: Decoding PEM data...
	I0729 12:02:35.032591   23580 main.go:141] libmachine: Parsing certificate...
	I0729 12:02:35.032679   23580 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/cert.pem
	I0729 12:02:35.032759   23580 main.go:141] libmachine: Decoding PEM data...
	I0729 12:02:35.032773   23580 main.go:141] libmachine: Parsing certificate...
	I0729 12:02:35.033583   23580 cli_runner.go:164] Run: docker network inspect force-systemd-env-292000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 12:02:35.052333   23580 cli_runner.go:211] docker network inspect force-systemd-env-292000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 12:02:35.052432   23580 network_create.go:284] running [docker network inspect force-systemd-env-292000] to gather additional debugging logs...
	I0729 12:02:35.052450   23580 cli_runner.go:164] Run: docker network inspect force-systemd-env-292000
	W0729 12:02:35.069757   23580 cli_runner.go:211] docker network inspect force-systemd-env-292000 returned with exit code 1
	I0729 12:02:35.069787   23580 network_create.go:287] error running [docker network inspect force-systemd-env-292000]: docker network inspect force-systemd-env-292000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-292000 not found
	I0729 12:02:35.069798   23580 network_create.go:289] output of [docker network inspect force-systemd-env-292000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-292000 not found
	
	** /stderr **
	I0729 12:02:35.069948   23580 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 12:02:35.089141   23580 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:02:35.090720   23580 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:02:35.092162   23580 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:02:35.093719   23580 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:02:35.095357   23580 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:02:35.097124   23580 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:02:35.099018   23580 network.go:209] skipping subnet 192.168.103.0/24 that is reserved: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 12:02:35.100233   23580 network.go:206] using free private subnet 192.168.112.0/24: &{IP:192.168.112.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.112.0/24 Gateway:192.168.112.1 ClientMin:192.168.112.2 ClientMax:192.168.112.254 Broadcast:192.168.112.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015b5dd0}
	I0729 12:02:35.100259   23580 network_create.go:124] attempt to create docker network force-systemd-env-292000 192.168.112.0/24 with gateway 192.168.112.1 and MTU of 65535 ...
	I0729 12:02:35.100386   23580 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.112.0/24 --gateway=192.168.112.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-292000 force-systemd-env-292000
	I0729 12:02:35.163812   23580 network_create.go:108] docker network force-systemd-env-292000 192.168.112.0/24 created
	I0729 12:02:35.163854   23580 kic.go:121] calculated static IP "192.168.112.2" for the "force-systemd-env-292000" container
	I0729 12:02:35.163970   23580 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 12:02:35.183334   23580 cli_runner.go:164] Run: docker volume create force-systemd-env-292000 --label name.minikube.sigs.k8s.io=force-systemd-env-292000 --label created_by.minikube.sigs.k8s.io=true
	I0729 12:02:35.200351   23580 oci.go:103] Successfully created a docker volume force-systemd-env-292000
	I0729 12:02:35.200474   23580 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-292000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-292000 --entrypoint /usr/bin/test -v force-systemd-env-292000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 12:02:35.472323   23580 oci.go:107] Successfully prepared a docker volume force-systemd-env-292000
	I0729 12:02:35.472355   23580 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 12:02:35.472374   23580 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 12:02:35.472474   23580 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-292000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0729 12:08:35.103128   23580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:08:35.103257   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:08:35.123868   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	I0729 12:08:35.123989   23580 retry.go:31] will retry after 181.019808ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:08:35.306671   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:08:35.326155   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	I0729 12:08:35.326277   23580 retry.go:31] will retry after 545.787576ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:08:35.874522   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:08:35.895380   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	I0729 12:08:35.895485   23580 retry.go:31] will retry after 420.563903ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:08:36.316419   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:08:36.336691   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	W0729 12:08:36.336808   23580 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	
	W0729 12:08:36.336826   23580 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:08:36.336893   23580 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 12:08:36.336946   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:08:36.354330   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	I0729 12:08:36.354443   23580 retry.go:31] will retry after 219.398666ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:08:36.574254   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:08:36.593809   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	I0729 12:08:36.593911   23580 retry.go:31] will retry after 413.832411ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:08:37.010214   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:08:37.029879   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	I0729 12:08:37.029974   23580 retry.go:31] will retry after 789.60622ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:08:37.822017   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:08:37.842037   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	W0729 12:08:37.842146   23580 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	
	W0729 12:08:37.842165   23580 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:08:37.842175   23580 start.go:128] duration metric: took 6m2.763368876s to createHost
	I0729 12:08:37.842242   23580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 12:08:37.842304   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:08:37.859512   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	I0729 12:08:37.859621   23580 retry.go:31] will retry after 213.382639ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:08:38.075460   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:08:38.095257   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	I0729 12:08:38.095354   23580 retry.go:31] will retry after 560.830206ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:08:38.658639   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:08:38.678463   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	I0729 12:08:38.678566   23580 retry.go:31] will retry after 308.313042ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:08:38.988509   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:08:39.008131   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	W0729 12:08:39.008229   23580 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	
	W0729 12:08:39.008247   23580 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:08:39.008330   23580 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 12:08:39.008385   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:08:39.026243   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	I0729 12:08:39.026339   23580 retry.go:31] will retry after 256.949184ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:08:39.285094   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:08:39.304486   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	I0729 12:08:39.304577   23580 retry.go:31] will retry after 347.696432ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:08:39.653516   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:08:39.680927   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	I0729 12:08:39.681019   23580 retry.go:31] will retry after 549.393445ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:08:40.232375   23580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000
	W0729 12:08:40.252080   23580 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000 returned with exit code 1
	W0729 12:08:40.252190   23580 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	
	W0729 12:08:40.252206   23580 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-292000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-292000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	I0729 12:08:40.252220   23580 fix.go:56] duration metric: took 6m26.403736746s for fixHost
	I0729 12:08:40.252227   23580 start.go:83] releasing machines lock for "force-systemd-env-292000", held for 6m26.403782362s
	W0729 12:08:40.252307   23580 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-292000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-292000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0729 12:08:40.295749   23580 out.go:177] 
	W0729 12:08:40.316931   23580 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0729 12:08:40.316970   23580 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0729 12:08:40.317017   23580 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0729 12:08:40.338932   23580 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-292000 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 52
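Note: the stderr stream above fails on one command repeated throughout: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'", which resolves the host port Docker mapped to the node's 22/tcp so minikube can SSH in. Because the container was never created, every attempt exits 1 with "No such container". A minimal Go sketch of that lookup (illustrative only, not minikube's actual cli_runner code; the function name is hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort asks dockerd which host port is mapped to the container's
	// 22/tcp, using the same Go template the log shows. A missing container
	// makes docker exit 1 with "No such container", as every retry above did.
	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).CombinedOutput()
		if err != nil {
			return "", fmt.Errorf("get port 22 for %q: %w: %s", container, err, out)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		if port, err := sshHostPort("force-systemd-env-292000"); err != nil {
			fmt.Println(err) // e.g. "No such container" while the host is being recreated
		} else {
			fmt.Println("ssh host port:", port)
		}
	}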
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-292000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-292000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (160.02886ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-env-292000 host status: state: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000
	

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-292000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
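Note: the assertions in this test key on distinct exit statuses (52 from start, 80 from ssh, and 7 from status further below). A minimal sketch, assuming nothing beyond the Go standard library, of how a harness recovers such codes from a failed command; runAndClassify is an illustrative name, not a helper from this repo:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// runAndClassify runs a command and surfaces its exit status:
	// 0 on success, the process's code on failure, -1 if it never started.
	func runAndClassify(name string, args ...string) int {
		err := exec.Command(name, args...).Run()
		if err == nil {
			return 0
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode()
		}
		return -1
	}

	func main() {
		// Hypothetical invocation mirroring the report's binary layout.
		code := runAndClassify("out/minikube-darwin-amd64", "status",
			"-p", "force-systemd-env-292000", "--format={{.Host}}")
		fmt.Println("exit status:", code) // the report saw 7 for this command
	}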
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-29 12:08:40.555114 -0700 PDT m=+6185.675218020
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-292000
helpers_test.go:235: (dbg) docker inspect force-systemd-env-292000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-env-292000",
	        "Id": "2f7ce0b337ff78370062f8201f76834c9fadc10bae8a418237861c2a6ebee38e",
	        "Created": "2024-07-29T19:02:35.117338421Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.112.0/24",
	                    "Gateway": "192.168.112.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-env-292000"
	        }
	    }
	]

-- /stdout --
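Note: the inspect output above confirms what leaked: the force-systemd-env-292000 network (created at 19:02:35Z, matching the network_create log line) survived, while "Containers" is empty because the node container never came up. A minimal sketch, assuming only the fields visible above, that detects such an orphaned network:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Only the fields needed for the check; the full inspect output has more.
	type network struct {
		Name       string                     `json:"Name"`
		Containers map[string]json.RawMessage `json:"Containers"`
	}

	func main() {
		out, err := exec.Command("docker", "network", "inspect", "force-systemd-env-292000").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		var nets []network // docker network inspect prints a JSON array
		if err := json.Unmarshal(out, &nets); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, n := range nets {
			if len(n.Containers) == 0 {
				fmt.Printf("network %q has no attached containers (orphaned)\n", n.Name)
			}
		}
	}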
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-292000 -n force-systemd-env-292000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-292000 -n force-systemd-env-292000: exit status 7 (73.612421ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 12:08:40.648322   24189 status.go:249] status error: host: state: unknown state "force-systemd-env-292000": docker container inspect force-systemd-env-292000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-292000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-292000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-292000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-292000
--- FAIL: TestForceSystemdEnv (756.53s)
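Note: the retry.go lines throughout this failure show the shape of minikube's backoff: jittered delays that grow from a few hundred milliseconds (164ms, 522ms, ...) to several seconds (2.33s, 3.35s, 8.29s) before the 360-second createHost deadline fires. A minimal sketch of that retry-with-jittered-backoff pattern (illustrative only, not minikube's retry.go; constants are assumptions):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry keeps calling fn with a jittered, growing delay until it
	// succeeds or the budget is spent, roughly the cadence the log shows.
	func retry(fn func() error, budget time.Duration) error {
		deadline := time.Now().Add(budget)
		base := 200 * time.Millisecond
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("giving up: %w", err)
			}
			sleep := base/2 + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			base = base * 3 / 2 // grow the base delay, roughly as the log does
		}
	}

	func main() {
		attempts := 0
		err := retry(func() error {
			attempts++
			if attempts < 4 {
				// Simulates the daemon's "No such container" failure above.
				return fmt.Errorf("no such container: force-systemd-env-292000")
			}
			return nil
		}, 5*time.Second)
		fmt.Println("attempts:", attempts, "err:", err)
	}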

TestMountStart/serial/VerifyMountSecond (885.62s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-730000 ssh -- ls /minikube-host
E0729 10:54:55.815464   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 10:55:30.860784   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 10:56:53.907896   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 10:59:55.818565   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 11:00:30.862118   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 11:04:55.821846   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 11:05:30.866967   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-730000 ssh -- ls /minikube-host: signal: killed (14m45.348452794s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-730000 ssh -- ls /minikube-host" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountSecond]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-730000
helpers_test.go:235: (dbg) docker inspect mount-start-2-730000:

-- stdout --
	[
	    {
	        "Id": "e51c223f0a4061e50e02f701a06457c1490d62bd23e0eac68985ba088ab351ae",
	        "Created": "2024-07-29T17:52:20.974869757Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 501047,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-29T17:52:21.068017867Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f7a7de1851ee150766e4477ba0f200b8a850318ef537b8ef6899afcaea59940a",
	        "ResolvConfPath": "/var/lib/docker/containers/e51c223f0a4061e50e02f701a06457c1490d62bd23e0eac68985ba088ab351ae/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e51c223f0a4061e50e02f701a06457c1490d62bd23e0eac68985ba088ab351ae/hostname",
	        "HostsPath": "/var/lib/docker/containers/e51c223f0a4061e50e02f701a06457c1490d62bd23e0eac68985ba088ab351ae/hosts",
	        "LogPath": "/var/lib/docker/containers/e51c223f0a4061e50e02f701a06457c1490d62bd23e0eac68985ba088ab351ae/e51c223f0a4061e50e02f701a06457c1490d62bd23e0eac68985ba088ab351ae-json.log",
	        "Name": "/mount-start-2-730000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "mount-start-2-730000:/var",
	                "/host_mnt/Users:/minikube-host"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-2-730000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/55cf280416fd7077703d529e734127d5e59424c01edc70d311a15d1a1092958e-init/diff:/var/lib/docker/overlay2/5df2debeeec49de66d0952d1582bf6a1a2ddda887655938b9baf9629274d81c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/55cf280416fd7077703d529e734127d5e59424c01edc70d311a15d1a1092958e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/55cf280416fd7077703d529e734127d5e59424c01edc70d311a15d1a1092958e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/55cf280416fd7077703d529e734127d5e59424c01edc70d311a15d1a1092958e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "mount-start-2-730000",
	                "Source": "/var/lib/docker/volumes/mount-start-2-730000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/host_mnt/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-2-730000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-2-730000",
	                "name.minikube.sigs.k8s.io": "mount-start-2-730000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5a07cb78eed67becf7d6f8b428a842d13d9dd5e6595cf6658c2372d3d9d4add9",
	            "SandboxKey": "/var/run/docker/netns/5a07cb78eed6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58482"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58483"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58484"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58485"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58486"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-2-730000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "781ba6ac92dc0ff44c890ada8e5c2a19734ee3762bf8cf37d52ffc3e54dbc2b8",
	                    "EndpointID": "868d7ddbe99b4395a41138fdf3fb21363e186b79bbf927925e5d234e13f7bcf7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "mount-start-2-730000",
	                        "e51c223f0a40"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
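
For reference, the forwarded SSH port in an inspect dump like the one above lives under NetworkSettings.Ports; minikube itself resolves it with the Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}, visible in the traces further down. A minimal standalone sketch that does the same against a saved dump follows; the file name inspect.json and the panic-style error handling are assumptions for illustration, not part of the test harness:

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os"
	)
	
	// inspectEntry carries only the slice of the `docker container inspect`
	// schema shown above that port resolution needs.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp, HostPort string
			}
		}
	}
	
	func main() {
		// e.g. docker container inspect mount-start-2-730000 > inspect.json
		data, err := os.ReadFile("inspect.json")
		if err != nil {
			panic(err)
		}
		// inspect output is a JSON array, matching the [ ... ] above.
		var entries []inspectEntry
		if err := json.Unmarshal(data, &entries); err != nil {
			panic(err)
		}
		// For the dump above, 22/tcp maps to 127.0.0.1:58482.
		for _, b := range entries[0].NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("ssh forwarded to %s:%s\n", b.HostIp, b.HostPort)
		}
	}
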
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-730000 -n mount-start-2-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-730000 -n mount-start-2-730000: exit status 6 (246.750612ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0729 11:07:12.057556   21839 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-2-730000" does not appear in /Users/jenkins/minikube-integration/19338-16127/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-730000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountSecond (885.62s)
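
The exit status 6 above reduces to one condition: the profile's container is reported Running, but its endpoint is absent from the kubeconfig that status.go:417 reads, and the warning already names the fix (minikube update-context, optionally with -p mount-start-2-730000). As a rough external check of the same condition, not minikube's actual code, one could load the kubeconfig with client-go and look the profile up by context name:

	package main
	
	import (
		"fmt"
		"os"
	
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Args: kubeconfig path and profile name, e.g. the pair from the error above.
		path, profile := os.Args[1], os.Args[2]
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			panic(err)
		}
		if _, ok := cfg.Contexts[profile]; !ok {
			// The condition reported above as `"<profile>" does not appear in <kubeconfig>`.
			fmt.Printf("%q missing from %s; run: minikube update-context -p %s\n",
				profile, path, profile)
		}
	}
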

TestMultiNode/serial/FreshStart2Nodes (752.18s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-452000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0729 11:09:55.825063   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 11:10:30.868385   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 11:13:33.958212   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 11:14:55.866516   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 11:15:30.911690   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 11:19:55.868319   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 11:20:30.913393   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-452000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m32.072370578s)
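
Two details in the stderr trace below are worth flagging before reading it. First, the 12m32s wall time is two back-to-back createHost attempts, each bounded by the StartHostTimeout:6m0s in the cluster config ("create host timed out in 360.000000 seconds", followed by a recreate). Second, the network setup shows minikube probing for a free private /24: 192.168.49.0/24 and 192.168.58.0/24 are reserved, so it settles on 192.168.67.0/24; the third octet advances by 9 per attempt. A sketch of that probing loop as observed here, where inUse is a hypothetical stand-in for the real docker network inspection:

	package main
	
	import "fmt"
	
	// firstFreeSubnet steps candidate /24s the way the trace below does:
	// 192.168.49.0/24 -> 192.168.58.0/24 -> 192.168.67.0/24 (+9 per step).
	func firstFreeSubnet(inUse func(cidr string) bool) (string, bool) {
		for third := 49; third <= 254; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !inUse(cidr) {
				return cidr, true
			}
		}
		return "", false
	}
	
	func main() {
		reserved := map[string]bool{"192.168.49.0/24": true, "192.168.58.0/24": true}
		cidr, _ := firstFreeSubnet(func(c string) bool { return reserved[c] })
		fmt.Println(cidr) // prints 192.168.67.0/24, matching the trace
	}
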

-- stdout --
	* [multinode-452000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "multinode-452000" primary control-plane node in "multinode-452000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-452000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0729 11:08:21.272993   21919 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:08:21.273184   21919 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:08:21.273190   21919 out.go:304] Setting ErrFile to fd 2...
	I0729 11:08:21.273193   21919 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:08:21.273364   21919 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 11:08:21.274879   21919 out.go:298] Setting JSON to false
	I0729 11:08:21.298166   21919 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":7671,"bootTime":1722268830,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 11:08:21.298386   21919 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 11:08:21.319477   21919 out.go:177] * [multinode-452000] minikube v1.33.1 on Darwin 14.5
	I0729 11:08:21.340582   21919 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 11:08:21.340640   21919 notify.go:220] Checking for updates...
	I0729 11:08:21.383329   21919 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	I0729 11:08:21.404663   21919 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 11:08:21.425661   21919 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:08:21.446494   21919 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	I0729 11:08:21.467691   21919 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:08:21.489164   21919 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:08:21.513508   21919 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 11:08:21.513694   21919 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 11:08:21.595291   21919 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:74 SystemTime:2024-07-29 18:08:21.585620767 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 11:08:21.637263   21919 out.go:177] * Using the docker driver based on user configuration
	I0729 11:08:21.658332   21919 start.go:297] selected driver: docker
	I0729 11:08:21.658371   21919 start.go:901] validating driver "docker" against <nil>
	I0729 11:08:21.658387   21919 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:08:21.662866   21919 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 11:08:21.740438   21919 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:74 SystemTime:2024-07-29 18:08:21.730978543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 11:08:21.740626   21919 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 11:08:21.740844   21919 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:08:21.762569   21919 out.go:177] * Using Docker Desktop driver with root privileges
	I0729 11:08:21.784370   21919 cni.go:84] Creating CNI manager for ""
	I0729 11:08:21.784400   21919 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 11:08:21.784412   21919 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 11:08:21.784571   21919 start.go:340] cluster config:
	{Name:multinode-452000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:08:21.806234   21919 out.go:177] * Starting "multinode-452000" primary control-plane node in "multinode-452000" cluster
	I0729 11:08:21.848454   21919 cache.go:121] Beginning downloading kic base image for docker with docker
	I0729 11:08:21.870360   21919 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0729 11:08:21.912367   21919 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 11:08:21.912418   21919 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 11:08:21.912444   21919 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0729 11:08:21.912462   21919 cache.go:56] Caching tarball of preloaded images
	I0729 11:08:21.912689   21919 preload.go:172] Found /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0729 11:08:21.912708   21919 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 11:08:21.914414   21919 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/multinode-452000/config.json ...
	I0729 11:08:21.914502   21919 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/multinode-452000/config.json: {Name:mk01cc98a82b9d20e08af26cdd07b6f80fb7472f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0729 11:08:21.939932   21919 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0729 11:08:21.939946   21919 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 11:08:21.940088   21919 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 11:08:21.940106   21919 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 11:08:21.940111   21919 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 11:08:21.940119   21919 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 11:08:21.940124   21919 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0729 11:08:21.943286   21919 image.go:273] response: 
	I0729 11:08:22.070270   21919 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0729 11:08:22.070326   21919 cache.go:194] Successfully downloaded all kic artifacts
	I0729 11:08:22.070388   21919 start.go:360] acquireMachinesLock for multinode-452000: {Name:mk5fd3750c8f47c8f1a41d32cc701d419b8c2809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:08:22.070565   21919 start.go:364] duration metric: took 162.128µs to acquireMachinesLock for "multinode-452000"
	I0729 11:08:22.070593   21919 start.go:93] Provisioning new machine with config: &{Name:multinode-452000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 11:08:22.070654   21919 start.go:125] createHost starting for "" (driver="docker")
	I0729 11:08:22.113052   21919 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0729 11:08:22.113263   21919 start.go:159] libmachine.API.Create for "multinode-452000" (driver="docker")
	I0729 11:08:22.113286   21919 client.go:168] LocalClient.Create starting
	I0729 11:08:22.113402   21919 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/ca.pem
	I0729 11:08:22.113455   21919 main.go:141] libmachine: Decoding PEM data...
	I0729 11:08:22.113473   21919 main.go:141] libmachine: Parsing certificate...
	I0729 11:08:22.113522   21919 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/cert.pem
	I0729 11:08:22.113561   21919 main.go:141] libmachine: Decoding PEM data...
	I0729 11:08:22.113571   21919 main.go:141] libmachine: Parsing certificate...
	I0729 11:08:22.114050   21919 cli_runner.go:164] Run: docker network inspect multinode-452000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 11:08:22.131506   21919 cli_runner.go:211] docker network inspect multinode-452000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 11:08:22.131624   21919 network_create.go:284] running [docker network inspect multinode-452000] to gather additional debugging logs...
	I0729 11:08:22.131643   21919 cli_runner.go:164] Run: docker network inspect multinode-452000
	W0729 11:08:22.148520   21919 cli_runner.go:211] docker network inspect multinode-452000 returned with exit code 1
	I0729 11:08:22.148544   21919 network_create.go:287] error running [docker network inspect multinode-452000]: docker network inspect multinode-452000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-452000 not found
	I0729 11:08:22.148559   21919 network_create.go:289] output of [docker network inspect multinode-452000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-452000 not found
	
	** /stderr **
	I0729 11:08:22.148681   21919 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 11:08:22.167483   21919 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 11:08:22.169101   21919 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 11:08:22.169456   21919 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001340980}
	I0729 11:08:22.169472   21919 network_create.go:124] attempt to create docker network multinode-452000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0729 11:08:22.169540   21919 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-452000 multinode-452000
	I0729 11:08:22.232158   21919 network_create.go:108] docker network multinode-452000 192.168.67.0/24 created
	I0729 11:08:22.232196   21919 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-452000" container
	I0729 11:08:22.232315   21919 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 11:08:22.249950   21919 cli_runner.go:164] Run: docker volume create multinode-452000 --label name.minikube.sigs.k8s.io=multinode-452000 --label created_by.minikube.sigs.k8s.io=true
	I0729 11:08:22.267804   21919 oci.go:103] Successfully created a docker volume multinode-452000
	I0729 11:08:22.267919   21919 cli_runner.go:164] Run: docker run --rm --name multinode-452000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-452000 --entrypoint /usr/bin/test -v multinode-452000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 11:08:22.654608   21919 oci.go:107] Successfully prepared a docker volume multinode-452000
	I0729 11:08:22.654662   21919 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 11:08:22.654683   21919 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 11:08:22.654882   21919 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-452000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0729 11:14:22.157423   21919 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:14:22.157561   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:14:22.177532   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:14:22.177654   21919 retry.go:31] will retry after 326.749322ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:22.506853   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:14:22.526264   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:14:22.526375   21919 retry.go:31] will retry after 395.322164ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:22.924160   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:14:22.944222   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:14:22.944314   21919 retry.go:31] will retry after 299.504021ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:23.246191   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:14:23.265266   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	W0729 11:14:23.265388   21919 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	
	W0729 11:14:23.265409   21919 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:23.265473   21919 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 11:14:23.265543   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:14:23.282729   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:14:23.282826   21919 retry.go:31] will retry after 133.12554ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:23.418298   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:14:23.437307   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:14:23.437399   21919 retry.go:31] will retry after 331.186123ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:23.770985   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:14:23.789919   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:14:23.790019   21919 retry.go:31] will retry after 635.794109ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:24.428222   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:14:24.448520   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:14:24.448631   21919 retry.go:31] will retry after 708.649449ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:25.157499   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:14:25.176539   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	W0729 11:14:25.176642   21919 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	
	W0729 11:14:25.176658   21919 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:25.176677   21919 start.go:128] duration metric: took 6m3.063690459s to createHost
	I0729 11:14:25.176685   21919 start.go:83] releasing machines lock for "multinode-452000", held for 6m3.06379449s
	W0729 11:14:25.176700   21919 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0729 11:14:25.177162   21919 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:14:25.194016   21919 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:14:25.194064   21919 delete.go:82] Unable to get host status for multinode-452000, assuming it has already been deleted: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	W0729 11:14:25.194147   21919 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0729 11:14:25.194159   21919 start.go:729] Will try again in 5 seconds ...
	I0729 11:14:30.194472   21919 start.go:360] acquireMachinesLock for multinode-452000: {Name:mk5fd3750c8f47c8f1a41d32cc701d419b8c2809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:14:30.195582   21919 start.go:364] duration metric: took 153.641µs to acquireMachinesLock for "multinode-452000"
	I0729 11:14:30.195650   21919 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:14:30.195661   21919 fix.go:54] fixHost starting: 
	I0729 11:14:30.196004   21919 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:14:30.215726   21919 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:14:30.215770   21919 fix.go:112] recreateIfNeeded on multinode-452000: state= err=unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:30.215789   21919 fix.go:117] machineExists: false. err=machine does not exist
	I0729 11:14:30.237658   21919 out.go:177] * docker "multinode-452000" container is missing, will recreate.
	I0729 11:14:30.279448   21919 delete.go:124] DEMOLISHING multinode-452000 ...
	I0729 11:14:30.279629   21919 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:14:30.297981   21919 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	W0729 11:14:30.298035   21919 stop.go:83] unable to get state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:30.298058   21919 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:30.298447   21919 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:14:30.315255   21919 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:14:30.315302   21919 delete.go:82] Unable to get host status for multinode-452000, assuming it has already been deleted: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:30.315398   21919 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-452000
	W0729 11:14:30.332167   21919 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-452000 returned with exit code 1
	I0729 11:14:30.332203   21919 kic.go:371] could not find the container multinode-452000 to remove it. will try anyways
	I0729 11:14:30.332293   21919 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:14:30.349261   21919 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	W0729 11:14:30.349317   21919 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:30.349403   21919 cli_runner.go:164] Run: docker exec --privileged -t multinode-452000 /bin/bash -c "sudo init 0"
	W0729 11:14:30.366005   21919 cli_runner.go:211] docker exec --privileged -t multinode-452000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0729 11:14:30.366034   21919 oci.go:650] error shutdown multinode-452000: docker exec --privileged -t multinode-452000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:31.366322   21919 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:14:31.384735   21919 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:14:31.384780   21919 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:31.384795   21919 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:14:31.384816   21919 retry.go:31] will retry after 343.686118ms: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:31.730854   21919 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:14:31.751633   21919 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:14:31.751678   21919 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:31.751688   21919 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:14:31.751725   21919 retry.go:31] will retry after 894.881606ms: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:32.649062   21919 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:14:32.668787   21919 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:14:32.668846   21919 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:32.668859   21919 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:14:32.668882   21919 retry.go:31] will retry after 1.275165799s: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:33.946451   21919 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:14:33.966037   21919 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:14:33.966080   21919 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:33.966089   21919 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:14:33.966116   21919 retry.go:31] will retry after 1.43716767s: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:35.403783   21919 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:14:35.423324   21919 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:14:35.423382   21919 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:35.423393   21919 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:14:35.423415   21919 retry.go:31] will retry after 2.8258723s: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:38.251139   21919 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:14:38.271005   21919 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:14:38.271050   21919 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:38.271059   21919 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:14:38.271082   21919 retry.go:31] will retry after 3.791329208s: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:42.063646   21919 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:14:42.084367   21919 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:14:42.084410   21919 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:42.084424   21919 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:14:42.084450   21919 retry.go:31] will retry after 4.134133579s: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:46.220106   21919 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:14:46.240059   21919 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:14:46.240104   21919 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:14:46.240118   21919 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:14:46.240148   21919 oci.go:88] couldn't shut down multinode-452000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	 
	I0729 11:14:46.240234   21919 cli_runner.go:164] Run: docker rm -f -v multinode-452000
	I0729 11:14:46.257626   21919 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-452000
	W0729 11:14:46.275962   21919 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-452000 returned with exit code 1
	I0729 11:14:46.276077   21919 cli_runner.go:164] Run: docker network inspect multinode-452000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 11:14:46.293565   21919 cli_runner.go:164] Run: docker network rm multinode-452000
	I0729 11:14:46.372307   21919 fix.go:124] Sleeping 1 second for extra luck!
	I0729 11:14:47.374483   21919 start.go:125] createHost starting for "" (driver="docker")
	I0729 11:14:47.396833   21919 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0729 11:14:47.397024   21919 start.go:159] libmachine.API.Create for "multinode-452000" (driver="docker")
	I0729 11:14:47.397051   21919 client.go:168] LocalClient.Create starting
	I0729 11:14:47.397287   21919 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/ca.pem
	I0729 11:14:47.397388   21919 main.go:141] libmachine: Decoding PEM data...
	I0729 11:14:47.397418   21919 main.go:141] libmachine: Parsing certificate...
	I0729 11:14:47.397507   21919 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/cert.pem
	I0729 11:14:47.397590   21919 main.go:141] libmachine: Decoding PEM data...
	I0729 11:14:47.397618   21919 main.go:141] libmachine: Parsing certificate...
	I0729 11:14:47.418601   21919 cli_runner.go:164] Run: docker network inspect multinode-452000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 11:14:47.438170   21919 cli_runner.go:211] docker network inspect multinode-452000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 11:14:47.438264   21919 network_create.go:284] running [docker network inspect multinode-452000] to gather additional debugging logs...
	I0729 11:14:47.438282   21919 cli_runner.go:164] Run: docker network inspect multinode-452000
	W0729 11:14:47.455572   21919 cli_runner.go:211] docker network inspect multinode-452000 returned with exit code 1
	I0729 11:14:47.455599   21919 network_create.go:287] error running [docker network inspect multinode-452000]: docker network inspect multinode-452000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-452000 not found
	I0729 11:14:47.455614   21919 network_create.go:289] output of [docker network inspect multinode-452000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-452000 not found
	
	** /stderr **
	I0729 11:14:47.455771   21919 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 11:14:47.474822   21919 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 11:14:47.476347   21919 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 11:14:47.477662   21919 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 11:14:47.478028   21919 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015517e0}
	I0729 11:14:47.478041   21919 network_create.go:124] attempt to create docker network multinode-452000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0729 11:14:47.478108   21919 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-452000 multinode-452000
	W0729 11:14:47.496005   21919 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-452000 multinode-452000 returned with exit code 1
	W0729 11:14:47.496045   21919 network_create.go:149] failed to create docker network multinode-452000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-452000 multinode-452000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0729 11:14:47.496063   21919 network_create.go:116] failed to create docker network multinode-452000 192.168.76.0/24, will retry: subnet is taken
	I0729 11:14:47.497408   21919 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 11:14:47.497778   21919 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00079b470}
	I0729 11:14:47.497790   21919 network_create.go:124] attempt to create docker network multinode-452000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0729 11:14:47.497864   21919 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-452000 multinode-452000
	I0729 11:14:47.560526   21919 network_create.go:108] docker network multinode-452000 192.168.85.0/24 created
	I0729 11:14:47.560564   21919 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-452000" container
	I0729 11:14:47.560664   21919 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 11:14:47.578347   21919 cli_runner.go:164] Run: docker volume create multinode-452000 --label name.minikube.sigs.k8s.io=multinode-452000 --label created_by.minikube.sigs.k8s.io=true
	I0729 11:14:47.595414   21919 oci.go:103] Successfully created a docker volume multinode-452000
	I0729 11:14:47.595525   21919 cli_runner.go:164] Run: docker run --rm --name multinode-452000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-452000 --entrypoint /usr/bin/test -v multinode-452000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 11:14:47.838217   21919 oci.go:107] Successfully prepared a docker volume multinode-452000
	I0729 11:14:47.838250   21919 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 11:14:47.838268   21919 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 11:14:47.838413   21919 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-452000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0729 11:20:47.399742   21919 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:20:47.399835   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:20:47.419403   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:20:47.419511   21919 retry.go:31] will retry after 310.112439ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:20:47.732025   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:20:47.751568   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:20:47.751672   21919 retry.go:31] will retry after 545.13381ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:20:48.299213   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:20:48.319270   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:20:48.319378   21919 retry.go:31] will retry after 836.834643ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:20:49.156682   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:20:49.176690   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	W0729 11:20:49.176800   21919 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	
	W0729 11:20:49.176817   21919 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:20:49.176876   21919 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 11:20:49.176929   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:20:49.193927   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:20:49.194041   21919 retry.go:31] will retry after 357.422357ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:20:49.553923   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:20:49.573713   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:20:49.573820   21919 retry.go:31] will retry after 290.062271ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:20:49.866308   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:20:49.885398   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:20:49.885495   21919 retry.go:31] will retry after 575.733041ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:20:50.463684   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:20:50.483148   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	W0729 11:20:50.483249   21919 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	
	W0729 11:20:50.483269   21919 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:20:50.483281   21919 start.go:128] duration metric: took 6m3.106656524s to createHost
	I0729 11:20:50.483359   21919 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:20:50.483418   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:20:50.501215   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:20:50.501304   21919 retry.go:31] will retry after 311.560857ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:20:50.815181   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:20:50.834232   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:20:50.834333   21919 retry.go:31] will retry after 528.011361ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:20:51.364780   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:20:51.384192   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:20:51.384284   21919 retry.go:31] will retry after 318.229915ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:20:51.704851   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:20:51.723280   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	W0729 11:20:51.723381   21919 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	
	W0729 11:20:51.723398   21919 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:20:51.723467   21919 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 11:20:51.723528   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:20:51.740514   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:20:51.740602   21919 retry.go:31] will retry after 240.95484ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:20:51.981935   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:20:52.001996   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:20:52.002092   21919 retry.go:31] will retry after 529.553409ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:20:52.533307   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:20:52.552888   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:20:52.552990   21919 retry.go:31] will retry after 610.984541ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:20:53.164417   21919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:20:53.186522   21919 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	W0729 11:20:53.186634   21919 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	
	W0729 11:20:53.186649   21919 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:20:53.186670   21919 fix.go:56] duration metric: took 6m22.988800459s for fixHost
	I0729 11:20:53.186677   21919 start.go:83] releasing machines lock for "multinode-452000", held for 6m22.988877387s
	W0729 11:20:53.186762   21919 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-452000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-452000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0729 11:20:53.230235   21919 out.go:177] 
	W0729 11:20:53.251241   21919 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0729 11:20:53.251302   21919 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0729 11:20:53.251335   21919 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0729 11:20:53.272320   21919 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-452000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
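
The retry lines above (retry.go:31) all follow one pattern: poll docker container inspect <name> --format={{.State.Status}} and sleep a little longer between attempts. A minimal Go sketch of that poll-with-backoff idea; waitForState, maxAttempts, and the backoff rule are illustrative stand-ins, not minikube's actual API:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForState polls a container's status, retrying with a growing
	// delay until it matches want or the attempts run out.
	func waitForState(name, want string, maxAttempts int) error {
		delay := time.Second
		for i := 0; i < maxAttempts; i++ {
			out, err := exec.Command("docker", "container", "inspect", name,
				"--format", "{{.State.Status}}").Output()
			if err == nil && strings.TrimSpace(string(out)) == want {
				return nil
			}
			time.Sleep(delay)
			delay += delay / 2 // widen the wait, roughly like the 2.8s, 3.8s, 4.1s steps above
		}
		return fmt.Errorf("container %q never reached state %q", name, want)
	}

	func main() {
		if err := waitForState("multinode-452000", "exited", 5); err != nil {
			fmt.Println(err)
		}
	}
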
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-452000
helpers_test.go:235: (dbg) docker inspect multinode-452000:

-- stdout --
	[
	    {
	        "Name": "multinode-452000",
	        "Id": "f760315f062927c94dda09aefa3b5e067ee662ee926915c4e3c9c5c1ae0e389a",
	        "Created": "2024-07-29T18:14:47.513067149Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-452000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (74.375456ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 11:20:53.440716   22082 status.go:249] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (752.18s)
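
A second pattern visible above (network_create.go): when docker network create fails with "Pool overlaps with other one on this address space", minikube marks the subnet as taken and retries on the next private /24, stepping the third octet (192.168.49.0, 192.168.58.0, 192.168.67.0, ...). A rough Go sketch of that fallback; tryCreate is a stand-in for the real docker invocation, and the step size is read off the subnets in the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// tryCreate attempts to create a bridge network on the given subnet.
	func tryCreate(name, subnet, gateway string) error {
		return exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).Run()
	}

	func main() {
		name := "multinode-452000"
		for octet := 49; octet < 256; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			gateway := fmt.Sprintf("192.168.%d.1", octet)
			if err := tryCreate(name, subnet, gateway); err != nil {
				continue // e.g. "Pool overlaps with other one on this address space"
			}
			fmt.Println("created", name, "on", subnet)
			return
		}
		fmt.Println("no free /24 found")
	}
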

TestMultiNode/serial/DeployApp2Nodes (92.17s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-452000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-452000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (102.206917ms)

** stderr ** 
	error: cluster "multinode-452000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-452000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-452000 -- rollout status deployment/busybox: exit status 1 (101.434608ms)

** stderr ** 
	error: no server found for cluster "multinode-452000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.951917ms)

** stderr ** 
	error: no server found for cluster "multinode-452000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.443922ms)

** stderr ** 
	error: no server found for cluster "multinode-452000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.510558ms)

** stderr ** 
	error: no server found for cluster "multinode-452000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.356929ms)

** stderr ** 
	error: no server found for cluster "multinode-452000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.845283ms)

** stderr ** 
	error: no server found for cluster "multinode-452000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.644255ms)

** stderr ** 
	error: no server found for cluster "multinode-452000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.093598ms)

** stderr ** 
	error: no server found for cluster "multinode-452000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.379565ms)

** stderr ** 
	error: no server found for cluster "multinode-452000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.220194ms)

** stderr ** 
	error: no server found for cluster "multinode-452000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.504389ms)

** stderr ** 
	error: no server found for cluster "multinode-452000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (102.761105ms)

** stderr ** 
	error: no server found for cluster "multinode-452000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-452000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-452000 -- exec  -- nslookup kubernetes.io: exit status 1 (101.461172ms)

** stderr ** 
	error: no server found for cluster "multinode-452000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-452000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-452000 -- exec  -- nslookup kubernetes.default: exit status 1 (101.232037ms)

** stderr ** 
	error: no server found for cluster "multinode-452000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-452000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-452000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (100.844454ms)

** stderr ** 
	error: no server found for cluster "multinode-452000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-452000
helpers_test.go:235: (dbg) docker inspect multinode-452000:

-- stdout --
	[
	    {
	        "Name": "multinode-452000",
	        "Id": "f760315f062927c94dda09aefa3b5e067ee662ee926915c4e3c9c5c1ae0e389a",
	        "Created": "2024-07-29T18:14:47.513067149Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-452000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (115.873358ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 11:22:25.611389   22162 status.go:249] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (92.17s)
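
Every kubectl failure in this test has the same root cause: the cluster host was never created, so the kubeconfig has no usable entry for "multinode-452000" and each command exits with "no server found for cluster". A small, illustrative pre-flight check in Go; hasContext is not part of the test suite:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hasContext reports whether the kubeconfig defines a context with
	// the given name.
	func hasContext(name string) bool {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false
		}
		for _, ctx := range strings.Fields(string(out)) {
			if ctx == name {
				return true
			}
		}
		return false
	}

	func main() {
		if !hasContext("multinode-452000") {
			fmt.Println("context missing; kubectl calls against it will fail")
		}
	}
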

TestMultiNode/serial/PingHostFrom2Pods (0.2s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-452000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (102.024904ms)

** stderr ** 
	error: no server found for cluster "multinode-452000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-452000
helpers_test.go:235: (dbg) docker inspect multinode-452000:

-- stdout --
	[
	    {
	        "Name": "multinode-452000",
	        "Id": "f760315f062927c94dda09aefa3b5e067ee662ee926915c4e3c9c5c1ae0e389a",
	        "Created": "2024-07-29T18:14:47.513067149Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-452000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (73.906568ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 11:22:25.808534   22169 status.go:249] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.20s)

TestMultiNode/serial/AddNode (0.26s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-452000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-452000 -v 3 --alsologtostderr: exit status 80 (161.520155ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0729 11:22:25.865371   22172 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:22:25.865644   22172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:25.865650   22172 out.go:304] Setting ErrFile to fd 2...
	I0729 11:22:25.865658   22172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:25.865843   22172 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 11:22:25.866200   22172 mustload.go:65] Loading cluster: multinode-452000
	I0729 11:22:25.866475   22172 config.go:182] Loaded profile config "multinode-452000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:22:25.866854   22172 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:22:25.883818   22172 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:22:25.905891   22172 out.go:177] 
	W0729 11:22:25.927671   22172 out.go:239] X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-452000 host status: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-452000 host status: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	
	I0729 11:22:25.948579   22172 out.go:177] 

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-452000 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-452000
helpers_test.go:235: (dbg) docker inspect multinode-452000:

-- stdout --
	[
	    {
	        "Name": "multinode-452000",
	        "Id": "f760315f062927c94dda09aefa3b5e067ee662ee926915c4e3c9c5c1ae0e389a",
	        "Created": "2024-07-29T18:14:47.513067149Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-452000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (74.030049ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 11:22:26.065906   22176 status.go:249] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.26s)

TestMultiNode/serial/MultiNodeLabels (0.13s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-452000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-452000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (37.519116ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-452000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-452000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-452000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-452000
helpers_test.go:235: (dbg) docker inspect multinode-452000:

-- stdout --
	[
	    {
	        "Name": "multinode-452000",
	        "Id": "f760315f062927c94dda09aefa3b5e067ee662ee926915c4e3c9c5c1ae0e389a",
	        "Created": "2024-07-29T18:14:47.513067149Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-452000"
	        }
	    }
	]
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (74.740279ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 11:22:26.199598   22181 status.go:249] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.13s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:166: expected profile "multinode-452000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-2-730000\",\"Status\":\"\",\"Config\":null,\"Active\":false,\"ActiveKubeContext\":false}],\"valid\":[{\"Name\":\"multinode-452000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-452000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-452000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
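The assertion above counts the entries in Config.Nodes from the profile-list JSON and expects 3; in the dump, the valid profile carries a single node entry. A hedged sketch of that check, using a minimal struct that mirrors only the field names visible in the dump (it is not minikube's actual config type):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList mirrors just the fields the assertion needs.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []json.RawMessage `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		// Trimmed-down version of the JSON in the log above.
		raw := []byte(`{"valid":[{"Name":"multinode-452000","Config":{"Nodes":[{}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		fmt.Println(len(pl.Valid[0].Config.Nodes)) // 1, not the expected 3
	}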
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-452000
helpers_test.go:235: (dbg) docker inspect multinode-452000:

-- stdout --
	[
	    {
	        "Name": "multinode-452000",
	        "Id": "f760315f062927c94dda09aefa3b5e067ee662ee926915c4e3c9c5c1ae0e389a",
	        "Created": "2024-07-29T18:14:47.513067149Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-452000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (75.083608ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 11:22:26.412797   22189 status.go:249] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (0.17s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-452000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-452000 status --output json --alsologtostderr: exit status 7 (75.663543ms)

-- stdout --
	{"Name":"multinode-452000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

-- /stdout --
** stderr ** 
	I0729 11:22:26.468861   22192 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:22:26.469140   22192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:26.469145   22192 out.go:304] Setting ErrFile to fd 2...
	I0729 11:22:26.469149   22192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:26.469316   22192 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 11:22:26.469506   22192 out.go:298] Setting JSON to true
	I0729 11:22:26.469528   22192 mustload.go:65] Loading cluster: multinode-452000
	I0729 11:22:26.469568   22192 notify.go:220] Checking for updates...
	I0729 11:22:26.469796   22192 config.go:182] Loaded profile config "multinode-452000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:22:26.469810   22192 status.go:255] checking status of multinode-452000 ...
	I0729 11:22:26.470206   22192 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:22:26.488484   22192 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:22:26.488560   22192 status.go:330] multinode-452000 host status = "" (err=state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	)
	I0729 11:22:26.488580   22192 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 11:22:26.488605   22192 status.go:260] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	E0729 11:22:26.488615   22192 status.go:263] The "multinode-452000" host does not exist!

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-452000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
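The decode failure is a shape mismatch rather than corrupt output: with the cluster reduced to a single (nonexistent) host, minikube status emits one JSON object, while the test unmarshals into a slice of status structs. A standalone sketch reproducing that error class (Status here is a stand-in for minikube's cmd.Status, using field names from the stdout above):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type Status struct {
		Name string
		Host string
	}

	func main() {
		// One object where an array of statuses is expected.
		raw := []byte(`{"Name":"multinode-452000","Host":"Nonexistent"}`)
		var statuses []Status
		err := json.Unmarshal(raw, &statuses)
		fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
	}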
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-452000
helpers_test.go:235: (dbg) docker inspect multinode-452000:

-- stdout --
	[
	    {
	        "Name": "multinode-452000",
	        "Id": "f760315f062927c94dda09aefa3b5e067ee662ee926915c4e3c9c5c1ae0e389a",
	        "Created": "2024-07-29T18:14:47.513067149Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-452000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (76.203418ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 11:22:26.586138   22196 status.go:249] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.17s)

TestMultiNode/serial/StopNode (0.4s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-452000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-452000 node stop m03: exit status 85 (151.155154ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-452000 node stop m03": exit status 85
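In this run, exit status 85 accompanies the GUEST_NODE_RETRIEVE reason in stderr: the m03 node was never created, so there is nothing to stop. For reference, a hedged sketch of how a Go caller recovers such an exit code (the general os/exec pattern, not the test's actual code; binary path and args are copied from the log):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "-p", "multinode-452000", "node", "stop", "m03")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println(exitErr.ExitCode()) // 85 in the run above
		}
	}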
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-452000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-452000 status: exit status 7 (75.253268ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0729 11:22:26.813943   22201 status.go:260] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	E0729 11:22:26.813954   22201 status.go:263] The "multinode-452000" host does not exist!

** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-452000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-452000 status --alsologtostderr: exit status 7 (74.497926ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0729 11:22:26.869731   22204 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:22:26.870000   22204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:26.870006   22204 out.go:304] Setting ErrFile to fd 2...
	I0729 11:22:26.870009   22204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:26.870186   22204 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 11:22:26.870373   22204 out.go:298] Setting JSON to false
	I0729 11:22:26.870395   22204 mustload.go:65] Loading cluster: multinode-452000
	I0729 11:22:26.870432   22204 notify.go:220] Checking for updates...
	I0729 11:22:26.870681   22204 config.go:182] Loaded profile config "multinode-452000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:22:26.870695   22204 status.go:255] checking status of multinode-452000 ...
	I0729 11:22:26.871066   22204 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:22:26.888492   22204 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:22:26.888563   22204 status.go:330] multinode-452000 host status = "" (err=state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	)
	I0729 11:22:26.888582   22204 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 11:22:26.888609   22204 status.go:260] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	E0729 11:22:26.888617   22204 status.go:263] The "multinode-452000" host does not exist!

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-452000 status --alsologtostderr": multinode-452000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:271: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-452000 status --alsologtostderr": multinode-452000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:275: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-452000 status --alsologtostderr": multinode-452000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-452000
helpers_test.go:235: (dbg) docker inspect multinode-452000:

-- stdout --
	[
	    {
	        "Name": "multinode-452000",
	        "Id": "f760315f062927c94dda09aefa3b5e067ee662ee926915c4e3c9c5c1ae0e389a",
	        "Created": "2024-07-29T18:14:47.513067149Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-452000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (75.288495ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 11:22:26.984955   22208 status.go:249] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.40s)

TestMultiNode/serial/StartAfterStop (52.39s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-452000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-452000 node start m03 -v=7 --alsologtostderr: exit status 85 (148.691743ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0729 11:22:27.040434   22211 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:22:27.041395   22211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:27.041402   22211 out.go:304] Setting ErrFile to fd 2...
	I0729 11:22:27.041406   22211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:27.041591   22211 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 11:22:27.041933   22211 mustload.go:65] Loading cluster: multinode-452000
	I0729 11:22:27.042235   22211 config.go:182] Loaded profile config "multinode-452000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:22:27.064049   22211 out.go:177] 
	W0729 11:22:27.085153   22211 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0729 11:22:27.085177   22211 out.go:239] * 
	* 
	W0729 11:22:27.090709   22211 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:22:27.111927   22211 out.go:177] 

** /stderr **
multinode_test.go:284: I0729 11:22:27.040434   22211 out.go:291] Setting OutFile to fd 1 ...
I0729 11:22:27.041395   22211 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 11:22:27.041402   22211 out.go:304] Setting ErrFile to fd 2...
I0729 11:22:27.041406   22211 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 11:22:27.041591   22211 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
I0729 11:22:27.041933   22211 mustload.go:65] Loading cluster: multinode-452000
I0729 11:22:27.042235   22211 config.go:182] Loaded profile config "multinode-452000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 11:22:27.064049   22211 out.go:177] 
W0729 11:22:27.085153   22211 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0729 11:22:27.085177   22211 out.go:239] * 
* 
W0729 11:22:27.090709   22211 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 11:22:27.111927   22211 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-452000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-452000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-452000 status -v=7 --alsologtostderr: exit status 7 (75.460763ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0729 11:22:27.190391   22213 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:22:27.190598   22213 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:27.190603   22213 out.go:304] Setting ErrFile to fd 2...
	I0729 11:22:27.190607   22213 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:27.190792   22213 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 11:22:27.190984   22213 out.go:298] Setting JSON to false
	I0729 11:22:27.191005   22213 mustload.go:65] Loading cluster: multinode-452000
	I0729 11:22:27.191045   22213 notify.go:220] Checking for updates...
	I0729 11:22:27.191305   22213 config.go:182] Loaded profile config "multinode-452000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:22:27.191320   22213 status.go:255] checking status of multinode-452000 ...
	I0729 11:22:27.191740   22213 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:22:27.209521   22213 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:22:27.209587   22213 status.go:330] multinode-452000 host status = "" (err=state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	)
	I0729 11:22:27.209607   22213 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 11:22:27.209635   22213 status.go:260] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	E0729 11:22:27.209646   22213 status.go:263] The "multinode-452000" host does not exist!

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-452000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-452000 status -v=7 --alsologtostderr: exit status 7 (79.772446ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0729 11:22:28.091006   22218 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:22:28.091180   22218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:28.091186   22218 out.go:304] Setting ErrFile to fd 2...
	I0729 11:22:28.091190   22218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:28.091366   22218 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 11:22:28.091561   22218 out.go:298] Setting JSON to false
	I0729 11:22:28.091584   22218 mustload.go:65] Loading cluster: multinode-452000
	I0729 11:22:28.091623   22218 notify.go:220] Checking for updates...
	I0729 11:22:28.091856   22218 config.go:182] Loaded profile config "multinode-452000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:22:28.091870   22218 status.go:255] checking status of multinode-452000 ...
	I0729 11:22:28.092249   22218 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:22:28.109875   22218 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:22:28.109939   22218 status.go:330] multinode-452000 host status = "" (err=state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	)
	I0729 11:22:28.109959   22218 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 11:22:28.109981   22218 status.go:260] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	E0729 11:22:28.109991   22218 status.go:263] The "multinode-452000" host does not exist!

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-452000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-452000 status -v=7 --alsologtostderr: exit status 7 (79.983919ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0729 11:22:29.423799   22221 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:22:29.424070   22221 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:29.424075   22221 out.go:304] Setting ErrFile to fd 2...
	I0729 11:22:29.424079   22221 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:29.424244   22221 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 11:22:29.424432   22221 out.go:298] Setting JSON to false
	I0729 11:22:29.424454   22221 mustload.go:65] Loading cluster: multinode-452000
	I0729 11:22:29.424495   22221 notify.go:220] Checking for updates...
	I0729 11:22:29.424752   22221 config.go:182] Loaded profile config "multinode-452000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:22:29.424766   22221 status.go:255] checking status of multinode-452000 ...
	I0729 11:22:29.425181   22221 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:22:29.444309   22221 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:22:29.444376   22221 status.go:330] multinode-452000 host status = "" (err=state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	)
	I0729 11:22:29.444396   22221 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 11:22:29.444421   22221 status.go:260] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	E0729 11:22:29.444430   22221 status.go:263] The "multinode-452000" host does not exist!

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-452000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-452000 status -v=7 --alsologtostderr: exit status 7 (82.338057ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0729 11:22:31.635651   22226 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:22:31.635878   22226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:31.635884   22226 out.go:304] Setting ErrFile to fd 2...
	I0729 11:22:31.635887   22226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:31.636082   22226 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 11:22:31.636283   22226 out.go:298] Setting JSON to false
	I0729 11:22:31.636307   22226 mustload.go:65] Loading cluster: multinode-452000
	I0729 11:22:31.636343   22226 notify.go:220] Checking for updates...
	I0729 11:22:31.636583   22226 config.go:182] Loaded profile config "multinode-452000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:22:31.636599   22226 status.go:255] checking status of multinode-452000 ...
	I0729 11:22:31.636996   22226 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:22:31.655852   22226 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:22:31.655911   22226 status.go:330] multinode-452000 host status = "" (err=state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	)
	I0729 11:22:31.655935   22226 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 11:22:31.655959   22226 status.go:260] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	E0729 11:22:31.655967   22226 status.go:263] The "multinode-452000" host does not exist!

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-452000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-452000 status -v=7 --alsologtostderr: exit status 7 (77.209114ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0729 11:22:35.615186   22231 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:22:35.615466   22231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:35.615472   22231 out.go:304] Setting ErrFile to fd 2...
	I0729 11:22:35.615475   22231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:35.615666   22231 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 11:22:35.615893   22231 out.go:298] Setting JSON to false
	I0729 11:22:35.615915   22231 mustload.go:65] Loading cluster: multinode-452000
	I0729 11:22:35.615954   22231 notify.go:220] Checking for updates...
	I0729 11:22:35.616175   22231 config.go:182] Loaded profile config "multinode-452000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:22:35.616189   22231 status.go:255] checking status of multinode-452000 ...
	I0729 11:22:35.616598   22231 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:22:35.634148   22231 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:22:35.634215   22231 status.go:330] multinode-452000 host status = "" (err=state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	)
	I0729 11:22:35.634233   22231 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 11:22:35.634255   22231 status.go:260] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	E0729 11:22:35.634262   22231 status.go:263] The "multinode-452000" host does not exist!

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-452000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-452000 status -v=7 --alsologtostderr: exit status 7 (78.356961ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0729 11:22:38.326810   22235 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:22:38.327103   22235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:38.327109   22235 out.go:304] Setting ErrFile to fd 2...
	I0729 11:22:38.327113   22235 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:38.327306   22235 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 11:22:38.327497   22235 out.go:298] Setting JSON to false
	I0729 11:22:38.327519   22235 mustload.go:65] Loading cluster: multinode-452000
	I0729 11:22:38.327563   22235 notify.go:220] Checking for updates...
	I0729 11:22:38.327799   22235 config.go:182] Loaded profile config "multinode-452000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:22:38.327813   22235 status.go:255] checking status of multinode-452000 ...
	I0729 11:22:38.328215   22235 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:22:38.347421   22235 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:22:38.347503   22235 status.go:330] multinode-452000 host status = "" (err=state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	)
	I0729 11:22:38.347533   22235 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 11:22:38.347560   22235 status.go:260] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	E0729 11:22:38.347568   22235 status.go:263] The "multinode-452000" host does not exist!

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-452000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-452000 status -v=7 --alsologtostderr: exit status 7 (77.885036ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0729 11:22:45.671367   22244 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:22:45.671585   22244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:45.671590   22244 out.go:304] Setting ErrFile to fd 2...
	I0729 11:22:45.671613   22244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:45.671797   22244 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 11:22:45.672020   22244 out.go:298] Setting JSON to false
	I0729 11:22:45.672062   22244 mustload.go:65] Loading cluster: multinode-452000
	I0729 11:22:45.672108   22244 notify.go:220] Checking for updates...
	I0729 11:22:45.672412   22244 config.go:182] Loaded profile config "multinode-452000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:22:45.672446   22244 status.go:255] checking status of multinode-452000 ...
	I0729 11:22:45.672850   22244 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:22:45.690494   22244 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:22:45.690555   22244 status.go:330] multinode-452000 host status = "" (err=state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	)
	I0729 11:22:45.690579   22244 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 11:22:45.690607   22244 status.go:260] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	E0729 11:22:45.690615   22244 status.go:263] The "multinode-452000" host does not exist!

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-452000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-452000 status -v=7 --alsologtostderr: exit status 7 (79.777561ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0729 11:22:59.333619   22251 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:22:59.333854   22251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:59.333860   22251 out.go:304] Setting ErrFile to fd 2...
	I0729 11:22:59.333864   22251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:22:59.334038   22251 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 11:22:59.334215   22251 out.go:298] Setting JSON to false
	I0729 11:22:59.334237   22251 mustload.go:65] Loading cluster: multinode-452000
	I0729 11:22:59.334271   22251 notify.go:220] Checking for updates...
	I0729 11:22:59.334511   22251 config.go:182] Loaded profile config "multinode-452000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:22:59.334525   22251 status.go:255] checking status of multinode-452000 ...
	I0729 11:22:59.334931   22251 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:22:59.353765   22251 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:22:59.353833   22251 status.go:330] multinode-452000 host status = "" (err=state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	)
	I0729 11:22:59.353859   22251 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 11:22:59.353884   22251 status.go:260] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	E0729 11:22:59.353892   22251 status.go:263] The "multinode-452000" host does not exist!

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-452000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-452000 status -v=7 --alsologtostderr: exit status 7 (82.080132ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
** stderr ** 
	I0729 11:23:19.257882   22260 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:23:19.258157   22260 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:23:19.258162   22260 out.go:304] Setting ErrFile to fd 2...
	I0729 11:23:19.258166   22260 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:23:19.258362   22260 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 11:23:19.258562   22260 out.go:298] Setting JSON to false
	I0729 11:23:19.258583   22260 mustload.go:65] Loading cluster: multinode-452000
	I0729 11:23:19.258624   22260 notify.go:220] Checking for updates...
	I0729 11:23:19.258868   22260 config.go:182] Loaded profile config "multinode-452000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:23:19.258881   22260 status.go:255] checking status of multinode-452000 ...
	I0729 11:23:19.259277   22260 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:23:19.278169   22260 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:23:19.278219   22260 status.go:330] multinode-452000 host status = "" (err=state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	)
	I0729 11:23:19.278247   22260 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 11:23:19.278272   22260 status.go:260] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	E0729 11:23:19.278278   22260 status.go:263] The "multinode-452000" host does not exist!

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-452000 status -v=7 --alsologtostderr" : exit status 7
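The timestamps in the attempts above (11:22:27 through 11:23:19) show the test re-running status with widening pauses before giving up; since no container backs the profile, every attempt returns exit status 7. A sketch of that poll-with-backoff pattern, assuming a doubling delay (the shape is inferred from the log spacing, not taken from the test source):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		delay := time.Second
		for attempt := 1; attempt <= 9; attempt++ {
			cmd := exec.Command("out/minikube-darwin-amd64", "-p", "multinode-452000", "status")
			if cmd.Run() == nil {
				fmt.Println("cluster healthy after", attempt, "attempt(s)")
				return
			}
			time.Sleep(delay)
			delay *= 2 // widen the gap between retries, mirroring the log
		}
		fmt.Println("giving up: status never succeeded")
	}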
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-452000
helpers_test.go:235: (dbg) docker inspect multinode-452000:

-- stdout --
	[
	    {
	        "Name": "multinode-452000",
	        "Id": "f760315f062927c94dda09aefa3b5e067ee662ee926915c4e3c9c5c1ae0e389a",
	        "Created": "2024-07-29T18:14:47.513067149Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-452000"
	        }
	    }
	]

-- /stdout --
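Note that this post-mortem inspect did not actually find a container: the JSON above (bridge "Driver", an IPAM block, an empty "Containers" map) is the leftover Docker *network* named multinode-452000. An untyped docker inspect matches any object with that name. A small sketch, relying only on the documented --type flag of docker inspect, that separates the two lookups:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Inspect the same name once per object type. With the container deleted
    // but its network still present, only the network lookup succeeds.
    func main() {
        for _, typ := range []string{"container", "network"} {
            out, err := exec.Command("docker", "inspect", "--type", typ,
                "multinode-452000").CombinedOutput()
            fmt.Printf("--type %s (err=%v):\n%s\n", typ, err, out)
        }
    }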
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (74.265893ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 11:23:19.373412   22264 status.go:249] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (52.39s)

TestMultiNode/serial/RestartKeepsNodes (790.78s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-452000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-452000
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-452000: exit status 82 (14.727849202s)

-- stdout --
	* Stopping node "multinode-452000"  ...
	* Stopping node "multinode-452000"  ...
	* Stopping node "multinode-452000"  ...
	* Stopping node "multinode-452000"  ...
	* Stopping node "multinode-452000"  ...
	* Stopping node "multinode-452000"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-452000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-452000" : exit status 82
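The six "Stopping node" lines above are retries of the same stop against a container that no longer exists; once the retry budget is spent, minikube exits with GUEST_STOP_TIMEOUT (exit status 82). A rough sketch of that retry shape (hypothetical retryStop helper, not minikube's retry package):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryStop tries stop() a fixed number of times with growing backoff and
    // surfaces a timeout-style error when every attempt fails.
    func retryStop(stop func() error, attempts int, backoff time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = stop(); err == nil {
                return nil
            }
            time.Sleep(backoff)
            backoff *= 2 // grow the wait between attempts
        }
        return fmt.Errorf("unable to stop VM after %d attempts: %w", attempts, err)
    }

    func main() {
        fail := func() error { return errors.New("No such container: multinode-452000") }
        fmt.Println(retryStop(fail, 6, 500*time.Millisecond))
    }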
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-452000 --wait=true -v=8 --alsologtostderr
E0729 11:24:38.922146   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 11:24:55.869964   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 11:25:30.915152   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 11:29:55.869998   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 11:30:13.966685   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 11:30:30.916236   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 11:34:55.873467   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 11:35:30.917788   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-452000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m55.82379191s)

-- stdout --
	* [multinode-452000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-452000" primary control-plane node in "multinode-452000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* docker "multinode-452000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-452000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0729 11:23:34.215598   22286 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:23:34.215852   22286 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:23:34.215858   22286 out.go:304] Setting ErrFile to fd 2...
	I0729 11:23:34.215861   22286 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:23:34.216025   22286 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 11:23:34.217474   22286 out.go:298] Setting JSON to false
	I0729 11:23:34.239916   22286 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":8584,"bootTime":1722268830,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 11:23:34.239995   22286 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 11:23:34.261883   22286 out.go:177] * [multinode-452000] minikube v1.33.1 on Darwin 14.5
	I0729 11:23:34.303724   22286 notify.go:220] Checking for updates...
	I0729 11:23:34.324696   22286 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 11:23:34.345700   22286 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	I0729 11:23:34.366676   22286 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 11:23:34.387660   22286 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:23:34.408745   22286 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	I0729 11:23:34.433832   22286 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:23:34.454287   22286 config.go:182] Loaded profile config "multinode-452000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:23:34.454445   22286 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:23:34.478532   22286 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 11:23:34.478703   22286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 11:23:34.559267   22286 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:false NGoroutines:98 SystemTime:2024-07-29 18:23:34.549721594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 11:23:34.580081   22286 out.go:177] * Using the docker driver based on existing profile
	I0729 11:23:34.621891   22286 start.go:297] selected driver: docker
	I0729 11:23:34.621918   22286 start.go:901] validating driver "docker" against &{Name:multinode-452000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:23:34.622035   22286 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:23:34.622248   22286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 11:23:34.704054   22286 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:false NGoroutines:98 SystemTime:2024-07-29 18:23:34.695154111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 11:23:34.707076   22286 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:23:34.707110   22286 cni.go:84] Creating CNI manager for ""
	I0729 11:23:34.707118   22286 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 11:23:34.707198   22286 start.go:340] cluster config:
	{Name:multinode-452000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:23:34.749769   22286 out.go:177] * Starting "multinode-452000" primary control-plane node in "multinode-452000" cluster
	I0729 11:23:34.771047   22286 cache.go:121] Beginning downloading kic base image for docker with docker
	I0729 11:23:34.792134   22286 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0729 11:23:34.834141   22286 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 11:23:34.834227   22286 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0729 11:23:34.834251   22286 cache.go:56] Caching tarball of preloaded images
	I0729 11:23:34.834224   22286 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 11:23:34.834482   22286 preload.go:172] Found /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0729 11:23:34.834502   22286 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 11:23:34.834662   22286 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/multinode-452000/config.json ...
	W0729 11:23:34.859994   22286 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0729 11:23:34.860015   22286 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 11:23:34.860128   22286 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 11:23:34.860145   22286 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 11:23:34.860151   22286 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 11:23:34.860159   22286 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 11:23:34.860164   22286 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0729 11:23:34.863305   22286 image.go:273] response: 
	I0729 11:23:35.454543   22286 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0729 11:23:35.454603   22286 cache.go:194] Successfully downloaded all kic artifacts
	I0729 11:23:35.454647   22286 start.go:360] acquireMachinesLock for multinode-452000: {Name:mk5fd3750c8f47c8f1a41d32cc701d419b8c2809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:23:35.454752   22286 start.go:364] duration metric: took 85.679µs to acquireMachinesLock for "multinode-452000"
	I0729 11:23:35.454783   22286 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:23:35.454794   22286 fix.go:54] fixHost starting: 
	I0729 11:23:35.455053   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:23:35.472184   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:23:35.472253   22286 fix.go:112] recreateIfNeeded on multinode-452000: state= err=unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:23:35.472270   22286 fix.go:117] machineExists: false. err=machine does not exist
	I0729 11:23:35.494136   22286 out.go:177] * docker "multinode-452000" container is missing, will recreate.
	I0729 11:23:35.514452   22286 delete.go:124] DEMOLISHING multinode-452000 ...
	I0729 11:23:35.514562   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:23:35.531974   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	W0729 11:23:35.532031   22286 stop.go:83] unable to get state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:23:35.532044   22286 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:23:35.532418   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:23:35.609378   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:23:35.609435   22286 delete.go:82] Unable to get host status for multinode-452000, assuming it has already been deleted: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:23:35.609526   22286 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-452000
	W0729 11:23:35.626623   22286 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-452000 returned with exit code 1
	I0729 11:23:35.626661   22286 kic.go:371] could not find the container multinode-452000 to remove it. will try anyways
	I0729 11:23:35.626737   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:23:35.644230   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	W0729 11:23:35.644280   22286 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:23:35.644362   22286 cli_runner.go:164] Run: docker exec --privileged -t multinode-452000 /bin/bash -c "sudo init 0"
	W0729 11:23:35.661401   22286 cli_runner.go:211] docker exec --privileged -t multinode-452000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0729 11:23:35.661439   22286 oci.go:650] error shutdown multinode-452000: docker exec --privileged -t multinode-452000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:23:36.661830   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:23:36.679067   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:23:36.679116   22286 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:23:36.679127   22286 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:23:36.679159   22286 retry.go:31] will retry after 725.75017ms: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:23:37.405137   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:23:37.422650   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:23:37.422695   22286 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:23:37.422706   22286 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:23:37.422730   22286 retry.go:31] will retry after 662.701205ms: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:23:38.085834   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:23:38.103285   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:23:38.103329   22286 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:23:38.103348   22286 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:23:38.103373   22286 retry.go:31] will retry after 872.762283ms: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:23:38.977111   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:23:38.994564   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:23:38.994611   22286 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:23:38.994620   22286 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:23:38.994643   22286 retry.go:31] will retry after 1.977667602s: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:23:40.973247   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:23:40.991751   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:23:40.991796   22286 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:23:40.991808   22286 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:23:40.991834   22286 retry.go:31] will retry after 1.966820227s: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:23:42.959188   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:23:42.977284   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:23:42.977337   22286 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:23:42.977347   22286 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:23:42.977371   22286 retry.go:31] will retry after 2.588526862s: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:23:45.566441   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:23:45.613650   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:23:45.613732   22286 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:23:45.613741   22286 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:23:45.613762   22286 retry.go:31] will retry after 8.510539596s: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:23:54.126870   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:23:54.147229   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:23:54.147282   22286 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:23:54.147291   22286 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:23:54.147327   22286 oci.go:88] couldn't shut down multinode-452000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	 
	I0729 11:23:54.147428   22286 cli_runner.go:164] Run: docker rm -f -v multinode-452000
	I0729 11:23:54.165243   22286 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-452000
	W0729 11:23:54.182881   22286 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-452000 returned with exit code 1
	I0729 11:23:54.183004   22286 cli_runner.go:164] Run: docker network inspect multinode-452000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 11:23:54.200743   22286 cli_runner.go:164] Run: docker network rm multinode-452000
	I0729 11:23:54.299558   22286 fix.go:124] Sleeping 1 second for extra luck!
	I0729 11:23:55.301558   22286 start.go:125] createHost starting for "" (driver="docker")
	I0729 11:23:55.324924   22286 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0729 11:23:55.325123   22286 start.go:159] libmachine.API.Create for "multinode-452000" (driver="docker")
	I0729 11:23:55.325161   22286 client.go:168] LocalClient.Create starting
	I0729 11:23:55.325343   22286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/ca.pem
	I0729 11:23:55.325438   22286 main.go:141] libmachine: Decoding PEM data...
	I0729 11:23:55.325476   22286 main.go:141] libmachine: Parsing certificate...
	I0729 11:23:55.325573   22286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/cert.pem
	I0729 11:23:55.325649   22286 main.go:141] libmachine: Decoding PEM data...
	I0729 11:23:55.325666   22286 main.go:141] libmachine: Parsing certificate...
	I0729 11:23:55.326641   22286 cli_runner.go:164] Run: docker network inspect multinode-452000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 11:23:55.345313   22286 cli_runner.go:211] docker network inspect multinode-452000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 11:23:55.345411   22286 network_create.go:284] running [docker network inspect multinode-452000] to gather additional debugging logs...
	I0729 11:23:55.345428   22286 cli_runner.go:164] Run: docker network inspect multinode-452000
	W0729 11:23:55.362573   22286 cli_runner.go:211] docker network inspect multinode-452000 returned with exit code 1
	I0729 11:23:55.362599   22286 network_create.go:287] error running [docker network inspect multinode-452000]: docker network inspect multinode-452000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-452000 not found
	I0729 11:23:55.362609   22286 network_create.go:289] output of [docker network inspect multinode-452000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-452000 not found
	
	** /stderr **
	I0729 11:23:55.362745   22286 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 11:23:55.382033   22286 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 11:23:55.383691   22286 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 11:23:55.384132   22286 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00158b950}
	I0729 11:23:55.384153   22286 network_create.go:124] attempt to create docker network multinode-452000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0729 11:23:55.384276   22286 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-452000 multinode-452000
	I0729 11:23:55.448045   22286 network_create.go:108] docker network multinode-452000 192.168.67.0/24 created
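The subnet chosen here comes from a linear scan: candidate private /24s are tried in order (192.168.49.0, 192.168.58.0, 192.168.67.0 — stepping by 9 in this log) and the first one not already reserved by an existing network wins. A toy version of that scan (the reserved set is a stand-in for the real check against existing Docker networks):

    package main

    import "fmt"

    // pickSubnet walks candidate /24s in the order seen in the log and
    // returns the first one that is not reserved.
    func pickSubnet(reserved map[string]bool) string {
        for third := 49; third <= 255; third += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            if !reserved[cidr] {
                return cidr
            }
        }
        return ""
    }

    func main() {
        taken := map[string]bool{"192.168.49.0/24": true, "192.168.58.0/24": true}
        fmt.Println(pickSubnet(taken)) // 192.168.67.0/24
    }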
	I0729 11:23:55.448089   22286 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-452000" container
	I0729 11:23:55.448189   22286 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 11:23:55.466198   22286 cli_runner.go:164] Run: docker volume create multinode-452000 --label name.minikube.sigs.k8s.io=multinode-452000 --label created_by.minikube.sigs.k8s.io=true
	I0729 11:23:55.483487   22286 oci.go:103] Successfully created a docker volume multinode-452000
	I0729 11:23:55.483604   22286 cli_runner.go:164] Run: docker run --rm --name multinode-452000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-452000 --entrypoint /usr/bin/test -v multinode-452000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 11:23:55.722508   22286 oci.go:107] Successfully prepared a docker volume multinode-452000
	I0729 11:23:55.722556   22286 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 11:23:55.722577   22286 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 11:23:55.722689   22286 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-452000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
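Note the six-minute jump from 11:23:55 (the extraction docker run above) to 11:29:55 below: the preload extraction never completed inside the createHost budget (L7189-style "took 6m3.5s to createHost"), the node container was never created, and every subsequent inspect therefore answers "No such container". One way to make such a child process fail fast is a context deadline; a generic sketch (docker ps is a stand-in command):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // Run a docker command under a hard deadline so a hung step is killed
    // and reported instead of silently consuming the whole start budget.
    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        out, err := exec.CommandContext(ctx, "docker", "ps", "-a").CombinedOutput()
        fmt.Printf("err=%v ctx=%v\n%s", err, ctx.Err(), out)
    }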
	I0729 11:29:55.328191   22286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:29:55.328325   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:29:55.347522   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:29:55.347649   22286 retry.go:31] will retry after 177.135048ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
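The template in these retries asks Docker which host port was published for the container's 22/tcp, which minikube needs before it can SSH into the node; with no container, the lookup can never succeed however often it is retried. The same lookup as a standalone sketch:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPort22 evaluates the template from the log against a container and
    // returns the host port mapped to 22/tcp.
    func hostPort22(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name, "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        fmt.Println(hostPort22("multinode-452000"))
    }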
	I0729 11:29:55.527196   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:29:55.569400   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:29:55.569549   22286 retry.go:31] will retry after 559.091316ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:29:56.130098   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:29:56.150545   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:29:56.150651   22286 retry.go:31] will retry after 789.922965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:29:56.941766   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:29:56.961832   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	W0729 11:29:56.961944   22286 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	
	W0729 11:29:56.961968   22286 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:29:56.962028   22286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 11:29:56.962078   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:29:56.980492   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:29:56.980592   22286 retry.go:31] will retry after 370.321585ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:29:57.353317   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:29:57.373594   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:29:57.373694   22286 retry.go:31] will retry after 296.230305ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:29:57.670329   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:29:57.690665   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:29:57.690767   22286 retry.go:31] will retry after 292.180294ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:29:57.983855   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:29:58.003709   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:29:58.003803   22286 retry.go:31] will retry after 823.842703ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:29:58.828001   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:29:58.847876   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	W0729 11:29:58.847987   22286 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	
	W0729 11:29:58.848005   22286 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:29:58.848024   22286 start.go:128] duration metric: took 6m3.544274548s to createHost
	I0729 11:29:58.848095   22286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:29:58.848146   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:29:58.865138   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:29:58.865242   22286 retry.go:31] will retry after 185.534265ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:29:59.053024   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:29:59.073330   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:29:59.073423   22286 retry.go:31] will retry after 242.724103ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:29:59.318538   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:29:59.338080   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:29:59.338182   22286 retry.go:31] will retry after 351.327379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:29:59.691881   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:29:59.712886   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:29:59.712983   22286 retry.go:31] will retry after 849.868089ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:00.565344   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:30:00.613658   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	W0729 11:30:00.613764   22286 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	
	W0729 11:30:00.613779   22286 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:00.613846   22286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 11:30:00.613904   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:30:00.631099   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:30:00.631195   22286 retry.go:31] will retry after 141.670039ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:00.773755   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:30:00.793643   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:30:00.793737   22286 retry.go:31] will retry after 502.400298ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:01.296485   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:30:01.313810   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:30:01.313899   22286 retry.go:31] will retry after 339.090501ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:01.653395   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:30:01.672483   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	W0729 11:30:01.672585   22286 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	
	W0729 11:30:01.672607   22286 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
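
Both disk probes above are plain shell pipelines run over the SSH runner inside the (Linux) node: `df -h /var | awk 'NR==2{print $5}'` prints the use percentage (row 2, column 5 of df's output), and `df -BG /var | awk 'NR==2{print $4}'` prints the gigabytes still available (column 4 with a 1 GB block size). With no container to SSH into, both fail. A small Go sketch of the same probe run locally, with a hypothetical diskFreeGiB helper:

    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
        "strings"
    )

    // diskFreeGiB runs the df/awk pipeline the log shows and parses the
    // value awk extracts (e.g. "17G") into an integer number of GiB.
    func diskFreeGiB(path string) (int, error) {
        cmd := exec.Command("sh", "-c",
            fmt.Sprintf("df -BG %s | awk 'NR==2{print $4}'", path))
        out, err := cmd.Output()
        if err != nil {
            return 0, err
        }
        s := strings.TrimSuffix(strings.TrimSpace(string(out)), "G")
        return strconv.Atoi(s)
    }

    func main() {
        free, err := diskFreeGiB("/var")
        if err != nil {
            fmt.Println("probe failed:", err)
            return
        }
        fmt.Printf("/var has %d GiB available\n", free)
    }
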
	I0729 11:30:01.672618   22286 fix.go:56] duration metric: took 6m26.215615647s for fixHost
	I0729 11:30:01.672624   22286 start.go:83] releasing machines lock for "multinode-452000", held for 6m26.215652955s
	W0729 11:30:01.672639   22286 start.go:714] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W0729 11:30:01.672710   22286 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I0729 11:30:01.672716   22286 start.go:729] Will try again in 5 seconds ...
	I0729 11:30:06.673561   22286 start.go:360] acquireMachinesLock for multinode-452000: {Name:mk5fd3750c8f47c8f1a41d32cc701d419b8c2809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:30:06.673762   22286 start.go:364] duration metric: took 165.998µs to acquireMachinesLock for "multinode-452000"
	I0729 11:30:06.673794   22286 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:30:06.673802   22286 fix.go:54] fixHost starting: 
	I0729 11:30:06.674229   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:30:06.694006   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:30:06.694048   22286 fix.go:112] recreateIfNeeded on multinode-452000: state= err=unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:06.694063   22286 fix.go:117] machineExists: false. err=machine does not exist
	I0729 11:30:06.715861   22286 out.go:177] * docker "multinode-452000" container is missing, will recreate.
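
fix.go decides whether the machine exists by running `docker container inspect <name> --format={{.State.Status}}` and treating any inspect failure as "machine does not exist", which is what routes this run into the recreate path. A sketch of that existence check under the same convention (the helper name is made up):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState returns the container's state ("running", "exited", ...)
    // and ok=false when docker cannot inspect it at all, i.e. the machine
    // has to be recreated rather than repaired.
    func containerState(name string) (state string, ok bool) {
        out, err := exec.Command("docker", "container", "inspect",
            name, "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", false // unknown state: treat as "does not exist"
        }
        return strings.TrimSpace(string(out)), true
    }

    func main() {
        if state, ok := containerState("multinode-452000"); !ok {
            fmt.Println("container is missing, will recreate")
        } else {
            fmt.Println("state:", state)
        }
    }
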
	I0729 11:30:06.738579   22286 delete.go:124] DEMOLISHING multinode-452000 ...
	I0729 11:30:06.738818   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:30:06.758654   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	W0729 11:30:06.758700   22286 stop.go:83] unable to get state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:06.758721   22286 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:06.759097   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:30:06.776024   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:30:06.776074   22286 delete.go:82] Unable to get host status for multinode-452000, assuming it has already been deleted: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:06.776165   22286 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-452000
	W0729 11:30:06.793413   22286 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-452000 returned with exit code 1
	I0729 11:30:06.793448   22286 kic.go:371] could not find the container multinode-452000 to remove it. will try anyways
	I0729 11:30:06.793526   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:30:06.810495   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	W0729 11:30:06.810539   22286 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:06.810632   22286 cli_runner.go:164] Run: docker exec --privileged -t multinode-452000 /bin/bash -c "sudo init 0"
	W0729 11:30:06.827648   22286 cli_runner.go:211] docker exec --privileged -t multinode-452000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0729 11:30:06.827678   22286 oci.go:650] error shutdown multinode-452000: docker exec --privileged -t multinode-452000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:07.830123   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:30:07.849870   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:30:07.849916   22286 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:07.849926   22286 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:30:07.849947   22286 retry.go:31] will retry after 493.033256ms: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:08.343786   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:30:08.363707   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:30:08.363761   22286 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:08.363772   22286 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:30:08.363802   22286 retry.go:31] will retry after 399.404654ms: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:08.764560   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:30:08.784086   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:30:08.784132   22286 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:08.784142   22286 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:30:08.784168   22286 retry.go:31] will retry after 696.465346ms: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:09.483055   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:30:09.503169   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:30:09.503216   22286 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:09.503226   22286 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:30:09.503247   22286 retry.go:31] will retry after 2.29697575s: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:11.802686   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:30:11.822301   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:30:11.822356   22286 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:11.822367   22286 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:30:11.822392   22286 retry.go:31] will retry after 1.403620637s: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:13.227323   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:30:13.247105   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:30:13.247148   22286 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:13.247158   22286 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:30:13.247185   22286 retry.go:31] will retry after 4.687725857s: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:17.935865   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:30:17.956188   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:30:17.956244   22286 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:17.956254   22286 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:30:17.956284   22286 retry.go:31] will retry after 4.989630335s: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:22.947542   22286 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:30:22.968087   22286 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:30:22.968133   22286 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:30:22.968145   22286 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:30:22.968180   22286 oci.go:88] couldn't shut down multinode-452000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	 
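
The DEMOLISHING path first asks the guest to power off (`docker exec --privileged -t <name> /bin/bash -c "sudo init 0"`), then polls `{{.State.Status}}` until it reads "exited", tolerating failure at every step because a missing container is an acceptable outcome. A condensed sketch of that best-effort shutdown loop (assumed helper names, not minikube's oci package):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // tryShutdown asks the guest to power itself off, then polls its state
    // until docker reports "exited" or the deadline passes. Every error is
    // tolerated: "No such container" means there is nothing left to stop.
    func tryShutdown(name string, deadline time.Duration) bool {
        _ = exec.Command("docker", "exec", "--privileged", "-t", name,
            "/bin/bash", "-c", "sudo init 0").Run() // best effort

        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            out, err := exec.Command("docker", "container", "inspect",
                name, "--format", "{{.State.Status}}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "exited" {
                return true
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false // caller proceeds to `docker rm -f` anyway
    }

    func main() {
        if !tryShutdown("multinode-452000", 15*time.Second) {
            fmt.Println("couldn't verify shutdown (might be okay), forcing removal")
        }
    }
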
	I0729 11:30:22.968265   22286 cli_runner.go:164] Run: docker rm -f -v multinode-452000
	I0729 11:30:22.986323   22286 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-452000
	W0729 11:30:23.003468   22286 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-452000 returned with exit code 1
	I0729 11:30:23.003576   22286 cli_runner.go:164] Run: docker network inspect multinode-452000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 11:30:23.021181   22286 cli_runner.go:164] Run: docker network rm multinode-452000
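
With the graceful path exhausted, teardown falls through to force-removal: `docker rm -f -v <name>` deletes the container and its anonymous volumes, a follow-up inspect confirms it is gone, and `docker network rm <name>` drops the per-profile bridge network. The same sequence as a best-effort sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // demolish force-removes the container (and its volumes) and the
    // profile-scoped network; errors are reported but not fatal, since
    // half-created resources may simply not exist.
    func demolish(name string) {
        steps := [][]string{
            {"docker", "rm", "-f", "-v", name},
            {"docker", "network", "rm", name},
        }
        for _, s := range steps {
            if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
                fmt.Printf("%v (ignored): %s", err, out)
            }
        }
    }

    func main() { demolish("multinode-452000") }
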
	I0729 11:30:23.103633   22286 fix.go:124] Sleeping 1 second for extra luck!
	I0729 11:30:24.105792   22286 start.go:125] createHost starting for "" (driver="docker")
	I0729 11:30:24.129067   22286 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0729 11:30:24.129224   22286 start.go:159] libmachine.API.Create for "multinode-452000" (driver="docker")
	I0729 11:30:24.129250   22286 client.go:168] LocalClient.Create starting
	I0729 11:30:24.129462   22286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/ca.pem
	I0729 11:30:24.129557   22286 main.go:141] libmachine: Decoding PEM data...
	I0729 11:30:24.129584   22286 main.go:141] libmachine: Parsing certificate...
	I0729 11:30:24.129673   22286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/cert.pem
	I0729 11:30:24.129752   22286 main.go:141] libmachine: Decoding PEM data...
	I0729 11:30:24.129766   22286 main.go:141] libmachine: Parsing certificate...
	I0729 11:30:24.130715   22286 cli_runner.go:164] Run: docker network inspect multinode-452000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 11:30:24.150044   22286 cli_runner.go:211] docker network inspect multinode-452000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 11:30:24.150144   22286 network_create.go:284] running [docker network inspect multinode-452000] to gather additional debugging logs...
	I0729 11:30:24.150160   22286 cli_runner.go:164] Run: docker network inspect multinode-452000
	W0729 11:30:24.167704   22286 cli_runner.go:211] docker network inspect multinode-452000 returned with exit code 1
	I0729 11:30:24.167733   22286 network_create.go:287] error running [docker network inspect multinode-452000]: docker network inspect multinode-452000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-452000 not found
	I0729 11:30:24.167750   22286 network_create.go:289] output of [docker network inspect multinode-452000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-452000 not found
	
	** /stderr **
	I0729 11:30:24.167878   22286 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 11:30:24.187771   22286 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 11:30:24.189479   22286 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 11:30:24.191363   22286 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 11:30:24.191829   22286 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00146eac0}
	I0729 11:30:24.191847   22286 network_create.go:124] attempt to create docker network multinode-452000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0729 11:30:24.191941   22286 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-452000 multinode-452000
	W0729 11:30:24.209905   22286 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-452000 multinode-452000 returned with exit code 1
	W0729 11:30:24.209941   22286 network_create.go:149] failed to create docker network multinode-452000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-452000 multinode-452000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0729 11:30:24.209959   22286 network_create.go:116] failed to create docker network multinode-452000 192.168.76.0/24, will retry: subnet is taken
	I0729 11:30:24.211330   22286 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 11:30:24.211703   22286 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00152b930}
	I0729 11:30:24.211716   22286 network_create.go:124] attempt to create docker network multinode-452000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0729 11:30:24.211788   22286 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-452000 multinode-452000
	I0729 11:30:24.291828   22286 network_create.go:108] docker network multinode-452000 192.168.85.0/24 created
	I0729 11:30:24.291858   22286 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-452000" container
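
The network_create.go exchange above is the interesting part: minikube walks candidate private /24s (192.168.49.0, .58.0, .67.0, .76.0, ...), skips any subnet already reserved, and attempts `docker network create`; when the daemon answers "Pool overlaps with other one on this address space" it marks the subnet taken and advances to the next candidate, here landing on 192.168.85.0/24 and deriving the node's static IP as gateway+1 (192.168.85.2). A sketch of that scan-and-fallback loop, assuming a hypothetical createProfileNetwork helper:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // createProfileNetwork tries candidate /24 subnets in order until
    // `docker network create` succeeds, treating an address-space overlap
    // as "subnet taken, try the next one".
    func createProfileNetwork(name string) (subnet string, err error) {
        // Third octets the log shows minikube stepping through.
        for _, third := range []int{49, 58, 67, 76, 85, 94} {
            subnet = fmt.Sprintf("192.168.%d.0/24", third)
            gateway := fmt.Sprintf("192.168.%d.1", third)
            out, cerr := exec.Command("docker", "network", "create",
                "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
                "-o", "com.docker.network.driver.mtu=65535",
                "--label=created_by.minikube.sigs.k8s.io=true", name).CombinedOutput()
            if cerr == nil {
                return subnet, nil // static node IP would be gateway+1
            }
            if strings.Contains(string(out), "Pool overlaps") {
                continue // subnet is taken, advance to the next candidate
            }
            return "", fmt.Errorf("network create failed: %s", out)
        }
        return "", fmt.Errorf("no free subnet found")
    }

    func main() {
        subnet, err := createProfileNetwork("multinode-452000")
        fmt.Println(subnet, err)
    }
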
	I0729 11:30:24.291973   22286 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 11:30:24.310008   22286 cli_runner.go:164] Run: docker volume create multinode-452000 --label name.minikube.sigs.k8s.io=multinode-452000 --label created_by.minikube.sigs.k8s.io=true
	I0729 11:30:24.327584   22286 oci.go:103] Successfully created a docker volume multinode-452000
	I0729 11:30:24.327707   22286 cli_runner.go:164] Run: docker run --rm --name multinode-452000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-452000 --entrypoint /usr/bin/test -v multinode-452000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 11:30:24.595143   22286 oci.go:107] Successfully prepared a docker volume multinode-452000
	I0729 11:30:24.595186   22286 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 11:30:24.595201   22286 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 11:30:24.595293   22286 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-452000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
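
Container creation stalls right after this point. The two `docker run` commands above are minikube's preload trick: a named volume is created for the node, a throwaway kicbase container validates it (`--entrypoint /usr/bin/test ... -d /var/lib`), and a second throwaway container bind-mounts the preloaded image tarball read-only and untars it into the volume with `tar -I lz4 -xf`. That extraction is the step that never returns here, which is what exhausts the 360-second create-host budget. A sketch of the extraction step with an explicit timeout (helper name and paths are illustrative):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // extractPreload untars a preloaded-images tarball into a docker volume
    // using a disposable container, with a hard timeout so a wedged docker
    // daemon cannot hang the caller forever (the failure mode in this log).
    func extractPreload(ctx context.Context, tarball, volume, image string) error {
        ctx, cancel := context.WithTimeout(ctx, 6*time.Minute)
        defer cancel()
        cmd := exec.CommandContext(ctx, "docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("extract failed: %w: %s", err, out)
        }
        return nil
    }

    func main() {
        err := extractPreload(context.Background(),
            "/path/to/preloaded-images.tar.lz4", // placeholder path
            "multinode-452000",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326")
        fmt.Println(err)
    }
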
	I0729 11:36:24.133109   22286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:36:24.133232   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:36:24.152926   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:36:24.153040   22286 retry.go:31] will retry after 165.438341ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:24.320857   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:36:24.340183   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:36:24.340292   22286 retry.go:31] will retry after 208.1486ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:24.549811   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:36:24.569882   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:36:24.569997   22286 retry.go:31] will retry after 551.547768ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:25.121975   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:36:25.142625   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:36:25.142736   22286 retry.go:31] will retry after 511.908368ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:25.657117   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:36:25.676925   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	W0729 11:36:25.677032   22286 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	
	W0729 11:36:25.677051   22286 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:25.677123   22286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 11:36:25.677178   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:36:25.694326   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:36:25.694423   22286 retry.go:31] will retry after 255.653616ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:25.950977   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:36:25.971847   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:36:25.971953   22286 retry.go:31] will retry after 277.319366ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:26.250544   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:36:26.270850   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:36:26.270950   22286 retry.go:31] will retry after 306.960581ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:26.579859   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:36:26.599444   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:36:26.599544   22286 retry.go:31] will retry after 754.332629ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:27.354828   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:36:27.374602   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	W0729 11:36:27.374717   22286 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	
	W0729 11:36:27.374737   22286 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:27.374745   22286 start.go:128] duration metric: took 6m3.266826102s to createHost
	I0729 11:36:27.374819   22286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:36:27.374873   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:36:27.391698   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:36:27.391790   22286 retry.go:31] will retry after 240.443896ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:27.632585   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:36:27.651443   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:36:27.651538   22286 retry.go:31] will retry after 453.486746ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:28.107397   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:36:28.127540   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:36:28.127635   22286 retry.go:31] will retry after 570.096044ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:28.699396   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:36:28.719336   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	W0729 11:36:28.719437   22286 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	
	W0729 11:36:28.719452   22286 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:28.719520   22286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 11:36:28.719573   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:36:28.737980   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:36:28.738076   22286 retry.go:31] will retry after 279.614813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:29.020154   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:36:29.041063   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:36:29.041160   22286 retry.go:31] will retry after 417.305993ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:29.458759   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:36:29.477645   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	I0729 11:36:29.477753   22286 retry.go:31] will retry after 341.64056ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:29.819805   22286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000
	W0729 11:36:29.840957   22286 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000 returned with exit code 1
	W0729 11:36:29.841059   22286 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	
	W0729 11:36:29.841077   22286 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-452000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-452000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:29.841087   22286 fix.go:56] duration metric: took 6m23.165091971s for fixHost
	I0729 11:36:29.841094   22286 start.go:83] releasing machines lock for "multinode-452000", held for 6m23.16512671s
	W0729 11:36:29.841171   22286 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-452000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-452000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0729 11:36:29.883821   22286 out.go:177] 
	W0729 11:36:29.906536   22286 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0729 11:36:29.906585   22286 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0729 11:36:29.906632   22286 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0729 11:36:29.926727   22286 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-452000" : exit status 52
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-452000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-452000
helpers_test.go:235: (dbg) docker inspect multinode-452000:

-- stdout --
	[
	    {
	        "Name": "multinode-452000",
	        "Id": "0120d0a18f0f61df82732b8f5121b5d09aa762dd7dcf659e4dc84c2eff12e661",
	        "Created": "2024-07-29T18:30:24.244444038Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-452000"
	        }
	    }
	]

-- /stdout --
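
The post-mortem confirms what the log implied: the container never came up, but the bridge network created at 18:30:24 survived, with an empty "Containers" map and the minikube ownership labels. A small Go sketch that detects such orphaned profile networks from `docker network inspect` output (struct fields match the JSON above; the helper name is made up):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // network mirrors the fields of `docker network inspect` we care about.
    type network struct {
        Name       string
        Containers map[string]json.RawMessage
        Labels     map[string]string
    }

    // orphanedMinikubeNetwork reports whether the named network exists, is
    // labeled as minikube-created, and has no containers attached.
    func orphanedMinikubeNetwork(name string) (bool, error) {
        out, err := exec.Command("docker", "network", "inspect", name).Output()
        if err != nil {
            return false, err // e.g. "network ... not found"
        }
        var nets []network
        if err := json.Unmarshal(out, &nets); err != nil {
            return false, err
        }
        for _, n := range nets {
            if n.Labels["created_by.minikube.sigs.k8s.io"] == "true" && len(n.Containers) == 0 {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        orphaned, err := orphanedMinikubeNetwork("multinode-452000")
        fmt.Println(orphaned, err)
    }
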
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (74.113694ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 11:36:30.155759   22764 status.go:249] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (790.78s)

TestMultiNode/serial/DeleteNode (0.33s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-452000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-452000 node delete m03: exit status 80 (160.806352ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-452000 host status: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	

** /stderr **
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-amd64 -p multinode-452000 node delete m03": exit status 80
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-452000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-452000 status --alsologtostderr: exit status 7 (74.907963ms)

-- stdout --
	multinode-452000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0729 11:36:30.372580   22770 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:36:30.372761   22770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:36:30.372766   22770 out.go:304] Setting ErrFile to fd 2...
	I0729 11:36:30.372770   22770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:36:30.372942   22770 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 11:36:30.373145   22770 out.go:298] Setting JSON to false
	I0729 11:36:30.373166   22770 mustload.go:65] Loading cluster: multinode-452000
	I0729 11:36:30.373203   22770 notify.go:220] Checking for updates...
	I0729 11:36:30.373431   22770 config.go:182] Loaded profile config "multinode-452000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:36:30.373446   22770 status.go:255] checking status of multinode-452000 ...
	I0729 11:36:30.373837   22770 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:36:30.391663   22770 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:36:30.391719   22770 status.go:330] multinode-452000 host status = "" (err=state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	)
	I0729 11:36:30.391738   22770 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 11:36:30.391763   22770 status.go:260] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	E0729 11:36:30.391771   22770 status.go:263] The "multinode-452000" host does not exist!

** /stderr **
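
The status call degrades gracefully: status.go fills a Status struct (Name, Host, Kubelet, APIServer, Kubeconfig, ...) with "Nonexistent" and exits 7, and the `--format={{.Host}}` flag the harness uses is just a Go text/template rendered against that struct. A sketch of that rendering, with a minimal stand-in struct:

    package main

    import (
        "os"
        "text/template"
    )

    // Status is a stand-in for the struct the log prints at status.go:257.
    type Status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
        s := Status{
            Name: "multinode-452000", Host: "Nonexistent",
            Kubelet: "Nonexistent", APIServer: "Nonexistent",
            Kubeconfig: "Nonexistent",
        }
        // Equivalent of `minikube status --format={{.Host}}`.
        tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
        _ = tmpl.Execute(os.Stdout, s)
    }
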
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-452000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-452000
helpers_test.go:235: (dbg) docker inspect multinode-452000:

-- stdout --
	[
	    {
	        "Name": "multinode-452000",
	        "Id": "0120d0a18f0f61df82732b8f5121b5d09aa762dd7dcf659e4dc84c2eff12e661",
	        "Created": "2024-07-29T18:30:24.244444038Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-452000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (73.67762ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 11:36:30.486418   22774 status.go:249] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.33s)
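Note: every status probe in the failure above reduces to the same docker call, and with the container already deleted it exits 1 ("No such container"), which minikube reports as "Nonexistent". A minimal, self-contained Go sketch of that mapping follows (illustrative only; containerState is a hypothetical helper, not minikube's actual cli_runner/status.go code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState mirrors the probe seen in the trace above:
// `docker container inspect <name> --format={{.State.Status}}`.
// A missing container makes the command exit 1 ("No such container"),
// which is surfaced here, as in the status output, as "Nonexistent".
func containerState(name string) string {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "Nonexistent"
	}
	return strings.TrimSpace(string(out))
}

func main() {
	// Container name taken from the log above.
	fmt.Println(containerState("multinode-452000"))
}

On this host the sketch would print "Nonexistent", matching the status fields captured in the stdout block above.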

                                                
                                    
TestMultiNode/serial/StopMultiNode (15.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-452000 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-452000 stop: exit status 82 (15.028648774s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-452000"  ...
	* Stopping node "multinode-452000"  ...
	* Stopping node "multinode-452000"  ...
	* Stopping node "multinode-452000"  ...
	* Stopping node "multinode-452000"  ...
	* Stopping node "multinode-452000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-452000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-darwin-amd64 -p multinode-452000 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-452000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-452000 status: exit status 7 (75.78297ms)

                                                
                                                
-- stdout --
	multinode-452000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 11:36:45.591270   22788 status.go:260] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	E0729 11:36:45.591290   22788 status.go:263] The "multinode-452000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-452000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-452000 status --alsologtostderr: exit status 7 (75.226312ms)

                                                
                                                
-- stdout --
	multinode-452000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:36:45.647773   22791 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:36:45.647958   22791 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:36:45.647964   22791 out.go:304] Setting ErrFile to fd 2...
	I0729 11:36:45.647967   22791 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:36:45.648145   22791 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 11:36:45.648325   22791 out.go:298] Setting JSON to false
	I0729 11:36:45.648351   22791 mustload.go:65] Loading cluster: multinode-452000
	I0729 11:36:45.648385   22791 notify.go:220] Checking for updates...
	I0729 11:36:45.648633   22791 config.go:182] Loaded profile config "multinode-452000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:36:45.648649   22791 status.go:255] checking status of multinode-452000 ...
	I0729 11:36:45.649022   22791 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:36:45.666523   22791 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:36:45.666595   22791 status.go:330] multinode-452000 host status = "" (err=state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	)
	I0729 11:36:45.666614   22791 status.go:257] multinode-452000 status: &{Name:multinode-452000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 11:36:45.666640   22791 status.go:260] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	E0729 11:36:45.666648   22791 status.go:263] The "multinode-452000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-452000 status --alsologtostderr": multinode-452000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-452000 status --alsologtostderr": multinode-452000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-452000
helpers_test.go:235: (dbg) docker inspect multinode-452000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-452000",
	        "Id": "0120d0a18f0f61df82732b8f5121b5d09aa762dd7dcf659e4dc84c2eff12e661",
	        "Created": "2024-07-29T18:30:24.244444038Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-452000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (74.088942ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 11:36:45.761836   22795 status.go:249] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (15.28s)
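Note: the two "incorrect number of stopped ..." assertions above amount to counting "Stopped" markers in the `minikube status` output; here both fields read "Nonexistent", so the count comes up short. A rough Go sketch of that check (the expected count and the exact match strings are assumptions for illustration, not the test's verbatim code):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status output captured above: both fields read "Nonexistent",
	// so the expected "Stopped" markers are absent.
	status := `multinode-452000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
`
	wantStopped := 1 // illustrative; the real test derives this from the node count
	if n := strings.Count(status, "host: Stopped"); n != wantStopped {
		fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", n, wantStopped)
	}
	if n := strings.Count(status, "kubelet: Stopped"); n != wantStopped {
		fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", n, wantStopped)
	}
}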

                                                
                                    
TestMultiNode/serial/RestartMultiNode (95.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-452000 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-452000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (1m35.510003985s)

                                                
                                                
-- stdout --
	* [multinode-452000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-452000" primary control-plane node in "multinode-452000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* docker "multinode-452000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 11:36:45.817309   22798 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:36:45.817574   22798 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:36:45.817580   22798 out.go:304] Setting ErrFile to fd 2...
	I0729 11:36:45.817584   22798 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:36:45.817753   22798 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 11:36:45.819074   22798 out.go:298] Setting JSON to false
	I0729 11:36:45.841505   22798 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":9375,"bootTime":1722268830,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 11:36:45.841609   22798 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 11:36:45.863909   22798 out.go:177] * [multinode-452000] minikube v1.33.1 on Darwin 14.5
	I0729 11:36:45.906527   22798 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 11:36:45.906608   22798 notify.go:220] Checking for updates...
	I0729 11:36:45.950068   22798 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	I0729 11:36:45.971255   22798 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 11:36:45.992392   22798 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:36:46.013274   22798 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	I0729 11:36:46.034486   22798 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:36:46.057117   22798 config.go:182] Loaded profile config "multinode-452000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 11:36:46.057864   22798 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:36:46.082221   22798 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 11:36:46.082396   22798 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 11:36:46.163793   22798 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:77 OomKillDisable:false NGoroutines:122 SystemTime:2024-07-29 18:36:46.154647911 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 11:36:46.206576   22798 out.go:177] * Using the docker driver based on existing profile
	I0729 11:36:46.227539   22798 start.go:297] selected driver: docker
	I0729 11:36:46.227585   22798 start.go:901] validating driver "docker" against &{Name:multinode-452000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:36:46.227702   22798 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:36:46.227931   22798 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 11:36:46.308598   22798 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:77 OomKillDisable:false NGoroutines:122 SystemTime:2024-07-29 18:36:46.299957982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 11:36:46.311710   22798 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:36:46.311751   22798 cni.go:84] Creating CNI manager for ""
	I0729 11:36:46.311758   22798 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 11:36:46.311837   22798 start.go:340] cluster config:
	{Name:multinode-452000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-452000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:36:46.354432   22798 out.go:177] * Starting "multinode-452000" primary control-plane node in "multinode-452000" cluster
	I0729 11:36:46.377491   22798 cache.go:121] Beginning downloading kic base image for docker with docker
	I0729 11:36:46.399224   22798 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0729 11:36:46.441547   22798 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 11:36:46.441599   22798 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 11:36:46.441628   22798 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0729 11:36:46.441648   22798 cache.go:56] Caching tarball of preloaded images
	I0729 11:36:46.441867   22798 preload.go:172] Found /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0729 11:36:46.441886   22798 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 11:36:46.442754   22798 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/multinode-452000/config.json ...
	W0729 11:36:46.467389   22798 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0729 11:36:46.467403   22798 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 11:36:46.467548   22798 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 11:36:46.467568   22798 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 11:36:46.467575   22798 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 11:36:46.467584   22798 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 11:36:46.467590   22798 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0729 11:36:46.470909   22798 image.go:273] response: 
	I0729 11:36:46.618521   22798 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0729 11:36:46.618582   22798 cache.go:194] Successfully downloaded all kic artifacts
	I0729 11:36:46.618626   22798 start.go:360] acquireMachinesLock for multinode-452000: {Name:mk5fd3750c8f47c8f1a41d32cc701d419b8c2809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:36:46.618760   22798 start.go:364] duration metric: took 112.383µs to acquireMachinesLock for "multinode-452000"
	I0729 11:36:46.618793   22798 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:36:46.618803   22798 fix.go:54] fixHost starting: 
	I0729 11:36:46.619038   22798 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:36:46.636397   22798 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:36:46.636458   22798 fix.go:112] recreateIfNeeded on multinode-452000: state= err=unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:46.636479   22798 fix.go:117] machineExists: false. err=machine does not exist
	I0729 11:36:46.679095   22798 out.go:177] * docker "multinode-452000" container is missing, will recreate.
	I0729 11:36:46.700122   22798 delete.go:124] DEMOLISHING multinode-452000 ...
	I0729 11:36:46.700216   22798 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:36:46.717503   22798 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	W0729 11:36:46.717554   22798 stop.go:83] unable to get state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:46.717567   22798 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:46.717961   22798 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:36:46.734897   22798 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:36:46.734944   22798 delete.go:82] Unable to get host status for multinode-452000, assuming it has already been deleted: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:46.735040   22798 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-452000
	W0729 11:36:46.751966   22798 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-452000 returned with exit code 1
	I0729 11:36:46.751998   22798 kic.go:371] could not find the container multinode-452000 to remove it. will try anyways
	I0729 11:36:46.752084   22798 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:36:46.769451   22798 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	W0729 11:36:46.769507   22798 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:46.769593   22798 cli_runner.go:164] Run: docker exec --privileged -t multinode-452000 /bin/bash -c "sudo init 0"
	W0729 11:36:46.786973   22798 cli_runner.go:211] docker exec --privileged -t multinode-452000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0729 11:36:46.787002   22798 oci.go:650] error shutdown multinode-452000: docker exec --privileged -t multinode-452000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:47.787435   22798 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:36:47.804759   22798 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:36:47.804806   22798 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:47.804814   22798 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:36:47.804851   22798 retry.go:31] will retry after 528.909975ms: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:48.334036   22798 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:36:48.351385   22798 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:36:48.351431   22798 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:48.351440   22798 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:36:48.351463   22798 retry.go:31] will retry after 1.101176354s: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:49.452995   22798 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:36:49.469910   22798 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:36:49.469954   22798 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:49.469963   22798 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:36:49.469989   22798 retry.go:31] will retry after 1.327929583s: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:50.798336   22798 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:36:50.815725   22798 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:36:50.815769   22798 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:50.815777   22798 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:36:50.815823   22798 retry.go:31] will retry after 1.28087075s: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:52.097085   22798 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:36:52.114517   22798 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:36:52.114563   22798 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:52.114573   22798 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:36:52.114602   22798 retry.go:31] will retry after 2.970178931s: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:55.085555   22798 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:36:55.105062   22798 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:36:55.105105   22798 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:36:55.105117   22798 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:36:55.105144   22798 retry.go:31] will retry after 4.916894386s: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:37:00.022389   22798 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:37:00.041916   22798 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:37:00.041959   22798 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:37:00.041968   22798 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:37:00.041994   22798 retry.go:31] will retry after 5.799925572s: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:37:05.842961   22798 cli_runner.go:164] Run: docker container inspect multinode-452000 --format={{.State.Status}}
	W0729 11:37:05.862864   22798 cli_runner.go:211] docker container inspect multinode-452000 --format={{.State.Status}} returned with exit code 1
	I0729 11:37:05.862907   22798 oci.go:662] temporary error verifying shutdown: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	I0729 11:37:05.862919   22798 oci.go:664] temporary error: container multinode-452000 status is  but expect it to be exited
	I0729 11:37:05.862968   22798 oci.go:88] couldn't shut down multinode-452000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000
	 
	I0729 11:37:05.863049   22798 cli_runner.go:164] Run: docker rm -f -v multinode-452000
	I0729 11:37:05.881393   22798 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-452000
	W0729 11:37:05.898416   22798 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-452000 returned with exit code 1
	I0729 11:37:05.898521   22798 cli_runner.go:164] Run: docker network inspect multinode-452000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 11:37:05.916166   22798 cli_runner.go:164] Run: docker network rm multinode-452000
	I0729 11:37:05.999140   22798 fix.go:124] Sleeping 1 second for extra luck!
	I0729 11:37:07.001367   22798 start.go:125] createHost starting for "" (driver="docker")
	I0729 11:37:07.023427   22798 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0729 11:37:07.023540   22798 start.go:159] libmachine.API.Create for "multinode-452000" (driver="docker")
	I0729 11:37:07.023565   22798 client.go:168] LocalClient.Create starting
	I0729 11:37:07.023738   22798 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/ca.pem
	I0729 11:37:07.023814   22798 main.go:141] libmachine: Decoding PEM data...
	I0729 11:37:07.023832   22798 main.go:141] libmachine: Parsing certificate...
	I0729 11:37:07.023898   22798 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19338-16127/.minikube/certs/cert.pem
	I0729 11:37:07.023940   22798 main.go:141] libmachine: Decoding PEM data...
	I0729 11:37:07.023980   22798 main.go:141] libmachine: Parsing certificate...
	I0729 11:37:07.024364   22798 cli_runner.go:164] Run: docker network inspect multinode-452000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 11:37:07.042645   22798 cli_runner.go:211] docker network inspect multinode-452000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 11:37:07.042739   22798 network_create.go:284] running [docker network inspect multinode-452000] to gather additional debugging logs...
	I0729 11:37:07.042755   22798 cli_runner.go:164] Run: docker network inspect multinode-452000
	W0729 11:37:07.062389   22798 cli_runner.go:211] docker network inspect multinode-452000 returned with exit code 1
	I0729 11:37:07.062414   22798 network_create.go:287] error running [docker network inspect multinode-452000]: docker network inspect multinode-452000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-452000 not found
	I0729 11:37:07.062427   22798 network_create.go:289] output of [docker network inspect multinode-452000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-452000 not found
	
	** /stderr **
	I0729 11:37:07.062549   22798 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 11:37:07.082210   22798 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 11:37:07.083618   22798 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 11:37:07.083965   22798 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001593250}
	I0729 11:37:07.083984   22798 network_create.go:124] attempt to create docker network multinode-452000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0729 11:37:07.084052   22798 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-452000 multinode-452000
	I0729 11:37:07.147614   22798 network_create.go:108] docker network multinode-452000 192.168.67.0/24 created
	I0729 11:37:07.147652   22798 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-452000" container
	I0729 11:37:07.147779   22798 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 11:37:07.165996   22798 cli_runner.go:164] Run: docker volume create multinode-452000 --label name.minikube.sigs.k8s.io=multinode-452000 --label created_by.minikube.sigs.k8s.io=true
	I0729 11:37:07.183267   22798 oci.go:103] Successfully created a docker volume multinode-452000
	I0729 11:37:07.183381   22798 cli_runner.go:164] Run: docker run --rm --name multinode-452000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-452000 --entrypoint /usr/bin/test -v multinode-452000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 11:37:07.431367   22798 oci.go:107] Successfully prepared a docker volume multinode-452000
	I0729 11:37:07.431408   22798 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 11:37:07.431428   22798 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 11:37:07.431561   22798 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-452000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-452000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-452000
helpers_test.go:235: (dbg) docker inspect multinode-452000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-452000",
	        "Id": "31a0854446eeda6da6447dd2ef01f6f99bf6dc87b817432f6017184280300477",
	        "Created": "2024-07-29T18:37:07.099242249Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-452000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-452000 -n multinode-452000: exit status 7 (74.193557ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 11:38:21.372912   22864 status.go:249] status error: host: state: unknown state "multinode-452000": docker container inspect multinode-452000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-452000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-452000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (95.61s)
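Note: the oci.go trace above shows the demolish path polling `docker container inspect --format={{.State.Status}}` with growing delays before giving up and force-removing. A compact Go sketch of that verify-with-backoff pattern follows (the delays, deadline, and messages are illustrative, not minikube's actual retry.go/oci.go code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// verifyExited reports whether the container's state reads "exited".
// With the container already gone, the inspect call keeps failing,
// so this never succeeds, exactly as in the trace above.
func verifyExited(name string) bool {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	return err == nil && strings.TrimSpace(string(out)) == "exited"
}

func main() {
	name := "multinode-452000" // container name taken from the log
	delay := 500 * time.Millisecond
	deadline := time.Now().Add(20 * time.Second)
	for time.Now().Before(deadline) {
		if verifyExited(name) {
			fmt.Println("container exited")
			return
		}
		fmt.Printf("will retry after %v\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // growing waits, roughly like the retry.go lines above
	}
	fmt.Println("couldn't verify container is exited (might be okay); force-removing")
}

The effect of the backoff is visible in the log itself: the retry.go waits grow from 528ms through 1.1s, 1.3s, and so on up to 5.8s, after which oci.go concludes the shutdown can't be verified and proceeds to `docker rm -f -v`.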

                                                
                                    
TestScheduledStopUnix (300.54s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-651000 --memory=2048 --driver=docker 
E0729 11:40:30.918716   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 11:41:18.929360   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 11:44:55.951814   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-651000 --memory=2048 --driver=docker : signal: killed (5m0.003632922s)
-- stdout --
	* [scheduled-stop-651000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-651000" primary control-plane node in "scheduled-stop-651000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed
-- stdout --
	* [scheduled-stop-651000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-651000" primary control-plane node in "scheduled-stop-651000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
-- /stdout --
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-29 11:45:23.319819 -0700 PDT m=+4788.508094356
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-651000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-651000:
-- stdout --
	[
	    {
	        "Name": "scheduled-stop-651000",
	        "Id": "bc954832787de67cd448f4c6559b832c1bcb7f3797c45e389cfe5f2a9f75a9f9",
	        "Created": "2024-07-29T18:40:24.1956079Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-651000"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-651000 -n scheduled-stop-651000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-651000 -n scheduled-stop-651000: exit status 7 (75.887212ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0729 11:45:23.416088   23209 status.go:249] status error: host: state: unknown state "scheduled-stop-651000": docker container inspect scheduled-stop-651000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-651000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-651000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-651000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-651000
--- FAIL: TestScheduledStopUnix (300.54s)
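The "signal: killed (5m0.003632922s)" failures in this report are the signature of a command outliving its context deadline: os/exec sends SIGKILL when the context expires, and the resulting error stringifies as "signal: killed". A small sketch of that mechanism, with a short timeout and "sleep" standing in for "minikube start":

// Hedged sketch: reproduces the "signal: killed" failure mode seen
// above. The 2s budget is illustrative; the harness here uses 5m.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// exec.CommandContext kills the process when ctx expires, so Run
	// returns an error whose message is "signal: killed".
	cmd := exec.CommandContext(ctx, "sleep", "60")
	start := time.Now()
	err := cmd.Run()
	fmt.Printf("err=%v after %s\n", err, time.Since(start))
}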

TestSkaffold (300.54s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe262694017 version
skaffold_test.go:59: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe262694017 version: (1.720887616s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-110000 --memory=2600 --driver=docker 
E0729 11:45:30.995909   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 11:46:54.049805   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 11:49:55.950861   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-110000 --memory=2600 --driver=docker : signal: killed (4m57.062907362s)
-- stdout --
	* [skaffold-110000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-110000" primary control-plane node in "skaffold-110000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...
-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed
-- stdout --
	* [skaffold-110000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-110000" primary control-plane node in "skaffold-110000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...
-- /stdout --
panic.go:626: *** TestSkaffold FAILED at 2024-07-29 11:50:23.861468 -0700 PDT m=+5089.049977522
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-110000
helpers_test.go:235: (dbg) docker inspect skaffold-110000:
-- stdout --
	[
	    {
	        "Name": "skaffold-110000",
	        "Id": "e7d4bc6035e7450ac494214f2ab4b66f6d017ced49966526a149046e05b5b543",
	        "Created": "2024-07-29T18:45:27.747273721Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-110000"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-110000 -n skaffold-110000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-110000 -n skaffold-110000: exit status 7 (74.578038ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0729 11:50:23.955573   23300 status.go:249] status error: host: state: unknown state "skaffold-110000": docker container inspect skaffold-110000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-110000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-110000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-110000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-110000
--- FAIL: TestSkaffold (300.54s)
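Before the start timed out, the skaffold test did run a downloaded binary from a temporary path (the skaffold.exe262694017 name above is an os.CreateTemp-style suffix). A sketch of that fetch-then-execute pattern, with a tiny shell script standing in for the downloaded binary:

// Hedged sketch: mimics running a fetched tool from a temp path, as the
// skaffold version check does. The script body is a stand-in; a real
// harness would write the downloaded bytes instead.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	f, err := os.CreateTemp("", "tool.exe") // yields names like tool.exe262694017
	if err != nil {
		fmt.Println(err)
		return
	}
	defer os.Remove(f.Name())

	// Placeholder payload that answers any invocation with a version string.
	f.WriteString("#!/bin/sh\necho v2.13.1\n")
	f.Close()
	os.Chmod(f.Name(), 0o755)

	out, err := exec.Command(f.Name(), "version").Output()
	fmt.Printf("out=%q err=%v\n", out, err)
}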

TestInsufficientStorage (300.45s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-014000 --memory=2048 --output=json --wait=true --driver=docker 
E0729 11:50:30.995745   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 11:54:55.952247   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-014000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.003516718s)
-- stdout --
	{"specversion":"1.0","id":"bd078a34-3772-43ef-968a-eea0c1c0b179","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-014000] minikube v1.33.1 on Darwin 14.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f1b0c5a-a238-49b5-a76d-3c54911a32d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19338"}}
	{"specversion":"1.0","id":"ff82cb61-3ed3-4994-8700-70aff53e9789","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig"}}
	{"specversion":"1.0","id":"bca7a002-f89a-47db-a716-7ba3ef8378e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"90af88c0-2ee7-4048-a01f-39dfcb606110","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"876fa0dd-eda9-43d1-b4f4-55e8ca9d39c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube"}}
	{"specversion":"1.0","id":"d59dd7c7-28af-4e57-928c-f149641fa426","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5389b14a-84f5-40f6-ab07-f7344a5108cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"985e1e34-1999-4f17-bf95-a31c6e2429e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c57b0839-6352-4bcf-bcf9-8b39322397d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a1f6cd34-5d33-456f-9f0a-e3ac2492b18c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"001c0144-2eff-48cf-9c5f-3f652a7360e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-014000\" primary control-plane node in \"insufficient-storage-014000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9d179d12-1397-4a67-8fe2-2d40fa197d66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1721902582-19326 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"736f7ad4-2d34-4e99-ad9f-fa38bb6ed0a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-014000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-014000 --output=json --layout=cluster: context deadline exceeded (1.118µs)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-014000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-014000
--- FAIL: TestInsufficientStorage (300.45s)
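The --output=json stdout above is a stream of newline-delimited CloudEvents-style records, and the "unmarshalling: unexpected end of JSON input" failure is what a decoder reports when the killed run leaves no (or a truncated) status document. A sketch that decodes such a stream, with a struct mirroring only the fields visible in this report:

// Hedged sketch: parses the newline-delimited events emitted by
// --output=json runs like the one above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

type minikubeEvent struct {
	SpecVersion string `json:"specversion"`
	Type        string `json:"type"`
	Data        struct {
		CurrentStep string `json:"currentstep"`
		Message     string `json:"message"`
		Name        string `json:"name"`
	} `json:"data"`
}

func main() {
	// Two lines adapted from the stdout above (extra fields are ignored
	// by json.Unmarshal anyway).
	stream := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_LOCATION=19338"}}
{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver"}}`

	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			// A killed run can leave a truncated line, producing the
			// "unexpected end of JSON input" error seen above.
			fmt.Println("bad line:", err)
			continue
		}
		fmt.Printf("%-32s %s\n", ev.Type, ev.Data.Message)
	}
}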
Test pass (171/212)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 10.78
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.34
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.30.3/json-events 10.11
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.3
18 TestDownloadOnly/v1.30.3/DeleteAll 0.35
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.21
21 TestDownloadOnly/v1.31.0-beta.0/json-events 12.2
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.29
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.35
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.21
29 TestDownloadOnlyKic 1.52
30 TestBinaryMirror 1.31
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.15
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.17
36 TestAddons/Setup 221.8
38 TestAddons/serial/Volcano 40.4
40 TestAddons/serial/GCPAuth/Namespaces 0.1
44 TestAddons/parallel/InspektorGadget 10.66
45 TestAddons/parallel/MetricsServer 5.63
46 TestAddons/parallel/HelmTiller 10.76
48 TestAddons/parallel/CSI 62.19
49 TestAddons/parallel/Headlamp 12.35
50 TestAddons/parallel/CloudSpanner 5.53
51 TestAddons/parallel/LocalPath 42.63
52 TestAddons/parallel/NvidiaDevicePlugin 5.48
53 TestAddons/parallel/Yakd 11.63
54 TestAddons/StoppedEnableDisable 11.42
62 TestHyperKitDriverInstallOrUpdate 7.73
65 TestErrorSpam/setup 21.11
66 TestErrorSpam/start 1.84
67 TestErrorSpam/status 0.8
68 TestErrorSpam/pause 1.39
69 TestErrorSpam/unpause 1.42
70 TestErrorSpam/stop 2.41
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 37.3
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 29.78
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.37
82 TestFunctional/serial/CacheCmd/cache/add_local 1.41
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
84 TestFunctional/serial/CacheCmd/cache/list 0.08
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.47
87 TestFunctional/serial/CacheCmd/cache/delete 0.16
88 TestFunctional/serial/MinikubeKubectlCmd 1.16
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.47
90 TestFunctional/serial/ExtraConfig 43.39
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 2.92
93 TestFunctional/serial/LogsFileCmd 2.93
94 TestFunctional/serial/InvalidService 4.16
96 TestFunctional/parallel/ConfigCmd 0.5
97 TestFunctional/parallel/DashboardCmd 17.53
98 TestFunctional/parallel/DryRun 1.75
99 TestFunctional/parallel/InternationalLanguage 0.74
100 TestFunctional/parallel/StatusCmd 0.79
105 TestFunctional/parallel/AddonsCmd 0.23
106 TestFunctional/parallel/PersistentVolumeClaim 28.25
108 TestFunctional/parallel/SSHCmd 0.52
109 TestFunctional/parallel/CpCmd 1.82
110 TestFunctional/parallel/MySQL 27.32
111 TestFunctional/parallel/FileSync 0.34
112 TestFunctional/parallel/CertSync 1.84
116 TestFunctional/parallel/NodeLabels 0.05
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.27
120 TestFunctional/parallel/License 0.54
121 TestFunctional/parallel/Version/short 0.11
122 TestFunctional/parallel/Version/components 0.75
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
127 TestFunctional/parallel/ImageCommands/ImageBuild 2.63
128 TestFunctional/parallel/ImageCommands/Setup 1.87
129 TestFunctional/parallel/DockerEnv/bash 1.08
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.12
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.57
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.85
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
140 TestFunctional/parallel/ServiceCmd/DeployApp 23.22
141 TestFunctional/parallel/ServiceCmd/List 0.3
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.31
143 TestFunctional/parallel/ServiceCmd/HTTPS 15
145 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.48
146 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
148 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.15
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
150 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
155 TestFunctional/parallel/ServiceCmd/Format 15
156 TestFunctional/parallel/ServiceCmd/URL 15.01
157 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
158 TestFunctional/parallel/ProfileCmd/profile_list 0.37
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
160 TestFunctional/parallel/MountCmd/any-port 7.29
161 TestFunctional/parallel/MountCmd/specific-port 1.86
162 TestFunctional/parallel/MountCmd/VerifyCleanup 2.11
163 TestFunctional/delete_echo-server_images 0.04
164 TestFunctional/delete_my-image_image 0.02
165 TestFunctional/delete_minikube_cached_images 0.02
169 TestMultiControlPlane/serial/StartCluster 106.13
170 TestMultiControlPlane/serial/DeployApp 5.48
171 TestMultiControlPlane/serial/PingHostFromPods 1.35
172 TestMultiControlPlane/serial/AddWorkerNode 20.66
173 TestMultiControlPlane/serial/NodeLabels 0.06
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.69
175 TestMultiControlPlane/serial/CopyFile 16.09
176 TestMultiControlPlane/serial/StopSecondaryNode 11.35
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.49
178 TestMultiControlPlane/serial/RestartSecondaryNode 20.45
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.05
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 317.27
181 TestMultiControlPlane/serial/DeleteSecondaryNode 10.31
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.48
183 TestMultiControlPlane/serial/StopCluster 32.69
184 TestMultiControlPlane/serial/RestartCluster 82.2
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.49
186 TestMultiControlPlane/serial/AddSecondaryNode 30.82
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.68
190 TestImageBuild/serial/Setup 20.55
191 TestImageBuild/serial/NormalBuild 1.79
192 TestImageBuild/serial/BuildWithBuildArg 0.86
193 TestImageBuild/serial/BuildWithDockerIgnore 0.66
194 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.78
198 TestJSONOutput/start/Command 35.91
199 TestJSONOutput/start/Audit 0
201 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/pause/Command 0.45
205 TestJSONOutput/pause/Audit 0
207 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/unpause/Command 0.52
211 TestJSONOutput/unpause/Audit 0
213 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
216 TestJSONOutput/stop/Command 10.75
217 TestJSONOutput/stop/Audit 0
219 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
220 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
221 TestErrorJSONOutput 0.64
223 TestKicCustomNetwork/create_custom_network 22.84
224 TestKicCustomNetwork/use_default_bridge_network 22.4
225 TestKicExistingNetwork 22.06
226 TestKicCustomSubnet 22.27
227 TestKicStaticIP 22.17
228 TestMainNoArgs 0.08
229 TestMinikubeProfile 49.38
232 TestMountStart/serial/StartWithMountFirst 7.02
233 TestMountStart/serial/VerifyMountFirst 0.25
234 TestMountStart/serial/StartWithMountSecond 7.38
254 TestPreload 121.33
275 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 13.04
276 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 13.01

TestDownloadOnly/v1.20.0/json-events (10.78s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-031000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-031000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker : (10.776609807s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.78s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-031000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-031000: exit status 85 (294.785818ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-031000 | jenkins | v1.33.1 | 29 Jul 24 10:25 PDT |          |
	|         | -p download-only-031000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:25:34
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:25:34.696058   16667 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:25:34.696325   16667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:25:34.696330   16667 out.go:304] Setting ErrFile to fd 2...
	I0729 10:25:34.696334   16667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:25:34.696520   16667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	W0729 10:25:34.696617   16667 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19338-16127/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19338-16127/.minikube/config/config.json: no such file or directory
	I0729 10:25:34.698406   16667 out.go:298] Setting JSON to true
	I0729 10:25:34.721029   16667 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5104,"bootTime":1722268830,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 10:25:34.721131   16667 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:25:34.741566   16667 out.go:97] [download-only-031000] minikube v1.33.1 on Darwin 14.5
	W0729 10:25:34.741790   16667 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 10:25:34.741818   16667 notify.go:220] Checking for updates...
	I0729 10:25:34.764699   16667 out.go:169] MINIKUBE_LOCATION=19338
	I0729 10:25:34.786698   16667 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	I0729 10:25:34.808623   16667 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 10:25:34.829714   16667 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:25:34.851792   16667 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	W0729 10:25:34.894681   16667 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 10:25:34.895221   16667 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:25:34.920779   16667 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 10:25:34.921033   16667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 10:25:35.002014   16667 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:69 SystemTime:2024-07-29 17:25:34.993051632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 10:25:35.023901   16667 out.go:97] Using the docker driver based on user configuration
	I0729 10:25:35.023953   16667 start.go:297] selected driver: docker
	I0729 10:25:35.023968   16667 start.go:901] validating driver "docker" against <nil>
	I0729 10:25:35.024202   16667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 10:25:35.106744   16667 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:69 SystemTime:2024-07-29 17:25:35.097737024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 10:25:35.106917   16667 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:25:35.109913   16667 start_flags.go:393] Using suggested 8100MB memory alloc based on sys=32768MB, container=15991MB
	I0729 10:25:35.110065   16667 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 10:25:35.131560   16667 out.go:169] Using Docker Desktop driver with root privileges
	I0729 10:25:35.152575   16667 cni.go:84] Creating CNI manager for ""
	I0729 10:25:35.152617   16667 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 10:25:35.152744   16667 start.go:340] cluster config:
	{Name:download-only-031000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:8100 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:25:35.174662   16667 out.go:97] Starting "download-only-031000" primary control-plane node in "download-only-031000" cluster
	I0729 10:25:35.174721   16667 cache.go:121] Beginning downloading kic base image for docker with docker
	I0729 10:25:35.196469   16667 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0729 10:25:35.196543   16667 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 10:25:35.196673   16667 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 10:25:35.215141   16667 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 10:25:35.215397   16667 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 10:25:35.215544   16667 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 10:25:35.257734   16667 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0729 10:25:35.257786   16667 cache.go:56] Caching tarball of preloaded images
	I0729 10:25:35.258183   16667 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 10:25:35.280132   16667 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 10:25:35.280184   16667 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0729 10:25:35.367567   16667 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0729 10:25:40.619788   16667 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0729 10:25:40.620029   16667 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0729 10:25:41.178166   16667 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 10:25:41.178391   16667 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/download-only-031000/config.json ...
	I0729 10:25:41.178417   16667 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/download-only-031000/config.json: {Name:mk3a27c14198d86ec485d555bf6eb4a541e845f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:25:41.178727   16667 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 10:25:41.179311   16667 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	I0729 10:25:42.518517   16667 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	
	
	* The control-plane node download-only-031000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-031000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)
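This subtest passes precisely because "minikube logs" fails: on a download-only profile the command exits with status 85, and the test asserts that specific code. A generic sketch of asserting an exact nonzero exit status, with a shell command standing in for the binary under test:

// Hedged sketch: asserting a specific exit status, as LogsDuration does
// with exit status 85. "sh -c 'exit 85'" is a placeholder for
// "minikube logs -p <profile>".
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func expectExit(want int, name string, args ...string) error {
	err := exec.Command(name, args...).Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		if got := exitErr.ExitCode(); got != want {
			return fmt.Errorf("exit status %d, want %d", got, want)
		}
		return nil
	}
	return fmt.Errorf("expected exit status %d, got err=%v", want, err)
}

func main() {
	if err := expectExit(85, "sh", "-c", "exit 85"); err != nil {
		fmt.Println("FAIL:", err)
		return
	}
	fmt.Println("ok: command exited 85 as expected")
}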

TestDownloadOnly/v1.20.0/DeleteAll (0.34s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.34s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-031000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.30.3/json-events (10.11s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-728000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-728000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=docker : (10.112003465s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (10.11s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-728000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-728000: exit status 85 (296.589116ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-031000 | jenkins | v1.33.1 | 29 Jul 24 10:25 PDT |                     |
	|         | -p download-only-031000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:25 PDT | 29 Jul 24 10:25 PDT |
	| delete  | -p download-only-031000        | download-only-031000 | jenkins | v1.33.1 | 29 Jul 24 10:25 PDT | 29 Jul 24 10:25 PDT |
	| start   | -o=json --download-only        | download-only-728000 | jenkins | v1.33.1 | 29 Jul 24 10:25 PDT |                     |
	|         | -p download-only-728000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:25:46
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:25:46.323139   16716 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:25:46.323402   16716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:25:46.323408   16716 out.go:304] Setting ErrFile to fd 2...
	I0729 10:25:46.323411   16716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:25:46.323589   16716 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 10:25:46.325050   16716 out.go:298] Setting JSON to true
	I0729 10:25:46.347727   16716 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5116,"bootTime":1722268830,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 10:25:46.347837   16716 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:25:46.369793   16716 out.go:97] [download-only-728000] minikube v1.33.1 on Darwin 14.5
	I0729 10:25:46.370011   16716 notify.go:220] Checking for updates...
	I0729 10:25:46.391449   16716 out.go:169] MINIKUBE_LOCATION=19338
	I0729 10:25:46.413477   16716 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	I0729 10:25:46.434527   16716 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 10:25:46.456357   16716 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:25:46.482547   16716 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	W0729 10:25:46.525075   16716 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 10:25:46.525565   16716 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:25:46.549674   16716 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 10:25:46.549829   16716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 10:25:46.631503   16716 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:69 SystemTime:2024-07-29 17:25:46.622860291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 10:25:46.653458   16716 out.go:97] Using the docker driver based on user configuration
	I0729 10:25:46.653504   16716 start.go:297] selected driver: docker
	I0729 10:25:46.653519   16716 start.go:901] validating driver "docker" against <nil>
	I0729 10:25:46.653747   16716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 10:25:46.733940   16716 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:69 SystemTime:2024-07-29 17:25:46.725737948 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 10:25:46.734127   16716 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:25:46.736955   16716 start_flags.go:393] Using suggested 8100MB memory alloc based on sys=32768MB, container=15991MB
	I0729 10:25:46.737094   16716 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 10:25:46.758824   16716 out.go:169] Using Docker Desktop driver with root privileges
	I0729 10:25:46.780210   16716 cni.go:84] Creating CNI manager for ""
	I0729 10:25:46.780285   16716 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:25:46.780303   16716 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:25:46.780449   16716 start.go:340] cluster config:
	{Name:download-only-728000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:8100 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-728000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:25:46.801473   16716 out.go:97] Starting "download-only-728000" primary control-plane node in "download-only-728000" cluster
	I0729 10:25:46.801514   16716 cache.go:121] Beginning downloading kic base image for docker with docker
	I0729 10:25:46.822535   16716 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0729 10:25:46.822603   16716 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:25:46.822674   16716 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 10:25:46.841283   16716 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 10:25:46.841508   16716 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 10:25:46.841528   16716 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 10:25:46.841534   16716 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 10:25:46.841541   16716 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 10:25:46.879184   16716 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0729 10:25:46.879229   16716 cache.go:56] Caching tarball of preloaded images
	I0729 10:25:46.879589   16716 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:25:46.901038   16716 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0729 10:25:46.901077   16716 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0729 10:25:46.984680   16716 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4?checksum=md5:6304692df2fe6f7b0bdd7f93d160be8c -> /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0729 10:25:51.663896   16716 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0729 10:25:51.664116   16716 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0729 10:25:52.153771   16716 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 10:25:52.154061   16716 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/download-only-728000/config.json ...
	I0729 10:25:52.154086   16716 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/download-only-728000/config.json: {Name:mk09a81f4cd0cc384b864b79022213ae91d860dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:25:52.154439   16716 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 10:25:52.154702   16716 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/darwin/amd64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-728000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-728000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.30s)
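Note: the preload download logged above (download.go:107, preload.go:236-254) fetches the tarball with an "?checksum=md5:..." query and then verifies the saved file against that digest. Below is a minimal Go sketch of that download-and-verify pattern, assuming the URL and md5 digest copied from the log; the helper name downloadAndVerify is illustrative, not minikube's actual code.

// md5check.go - sketch of the download-then-verify-md5 pattern shown in
// the preload log above. Not minikube's implementation.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func downloadAndVerify(url, wantMD5, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash the stream while writing it to disk, then compare digests.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4"
	if err := downloadAndVerify(url, "6304692df2fe6f7b0bdd7f93d160be8c", "preload.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}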

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.35s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-728000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (12.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-279000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-279000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=docker : (12.201740831s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (12.20s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-279000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-279000: exit status 85 (293.829236ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-031000 | jenkins | v1.33.1 | 29 Jul 24 10:25 PDT |                     |
	|         | -p download-only-031000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:25 PDT | 29 Jul 24 10:25 PDT |
	| delete  | -p download-only-031000             | download-only-031000 | jenkins | v1.33.1 | 29 Jul 24 10:25 PDT | 29 Jul 24 10:25 PDT |
	| start   | -o=json --download-only             | download-only-728000 | jenkins | v1.33.1 | 29 Jul 24 10:25 PDT |                     |
	|         | -p download-only-728000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:25 PDT | 29 Jul 24 10:25 PDT |
	| delete  | -p download-only-728000             | download-only-728000 | jenkins | v1.33.1 | 29 Jul 24 10:25 PDT | 29 Jul 24 10:25 PDT |
	| start   | -o=json --download-only             | download-only-279000 | jenkins | v1.33.1 | 29 Jul 24 10:25 PDT |                     |
	|         | -p download-only-279000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:25:57
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:25:57.291927   16767 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:25:57.292175   16767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:25:57.292180   16767 out.go:304] Setting ErrFile to fd 2...
	I0729 10:25:57.292184   16767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:25:57.292355   16767 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 10:25:57.293824   16767 out.go:298] Setting JSON to true
	I0729 10:25:57.316041   16767 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5127,"bootTime":1722268830,"procs":445,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 10:25:57.316131   16767 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:25:57.338090   16767 out.go:97] [download-only-279000] minikube v1.33.1 on Darwin 14.5
	I0729 10:25:57.338281   16767 notify.go:220] Checking for updates...
	I0729 10:25:57.360072   16767 out.go:169] MINIKUBE_LOCATION=19338
	I0729 10:25:57.380898   16767 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	I0729 10:25:57.402024   16767 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 10:25:57.423146   16767 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:25:57.445180   16767 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	W0729 10:25:57.486829   16767 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 10:25:57.487300   16767 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:25:57.511079   16767 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 10:25:57.511230   16767 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 10:25:57.590359   16767 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:69 SystemTime:2024-07-29 17:25:57.581584934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 10:25:57.611494   16767 out.go:97] Using the docker driver based on user configuration
	I0729 10:25:57.611516   16767 start.go:297] selected driver: docker
	I0729 10:25:57.611526   16767 start.go:901] validating driver "docker" against <nil>
	I0729 10:25:57.611682   16767 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 10:25:57.695877   16767 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:69 SystemTime:2024-07-29 17:25:57.68733286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:htt
ps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-
g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-des
ktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugi
ns/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 10:25:57.696072   16767 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:25:57.698931   16767 start_flags.go:393] Using suggested 8100MB memory alloc based on sys=32768MB, container=15991MB
	I0729 10:25:57.699087   16767 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 10:25:57.720994   16767 out.go:169] Using Docker Desktop driver with root privileges
	I0729 10:25:57.742965   16767 cni.go:84] Creating CNI manager for ""
	I0729 10:25:57.743041   16767 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 10:25:57.743054   16767 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 10:25:57.743208   16767 start.go:340] cluster config:
	{Name:download-only-279000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:8100 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-279000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:25:57.764918   16767 out.go:97] Starting "download-only-279000" primary control-plane node in "download-only-279000" cluster
	I0729 10:25:57.764959   16767 cache.go:121] Beginning downloading kic base image for docker with docker
	I0729 10:25:57.785730   16767 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0729 10:25:57.785800   16767 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 10:25:57.785905   16767 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 10:25:57.804215   16767 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 10:25:57.804405   16767 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 10:25:57.804441   16767 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 10:25:57.804448   16767 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 10:25:57.804455   16767 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 10:25:57.836172   16767 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0729 10:25:57.836187   16767 cache.go:56] Caching tarball of preloaded images
	I0729 10:25:57.836722   16767 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 10:25:57.858014   16767 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0729 10:25:57.858042   16767 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0729 10:25:57.950721   16767 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:181d3c061f7abe363e688bf9ac3c9580 -> /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0729 10:26:04.404363   16767 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0729 10:26:04.404664   16767 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0729 10:26:04.869296   16767 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 10:26:04.869525   16767 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/download-only-279000/config.json ...
	I0729 10:26:04.869550   16767 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/download-only-279000/config.json: {Name:mk7595d151143042e3f9c058750f3b306d7ff9c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:26:04.869887   16767 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 10:26:04.870160   16767 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19338-16127/.minikube/cache/darwin/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-279000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-279000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)
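Note: unlike the preload, the kubectl download above uses "checksum=file:<url>", i.e. the expected digest is fetched from a sidecar .sha256 file next to the binary rather than embedded in the URL. A minimal Go sketch of that pattern, assuming the dl.k8s.io URLs from the log; fetchExpectedSHA256 and verifyLocalFile are illustrative names, not minikube's code.

// sha256sidecar.go - sketch of the "checksum=file:<url>" pattern: fetch
// the expected digest from a sidecar .sha256 file, then hash the local file.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetchExpectedSHA256(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	// The sidecar is either "<digest>" or "<digest>  <filename>".
	fields := strings.Fields(string(b))
	if len(fields) == 0 {
		return "", fmt.Errorf("empty checksum file at %s", url)
	}
	return fields[0], nil
}

func verifyLocalFile(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("sha256 mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	want, err := fetchExpectedSHA256("https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/amd64/kubectl.sha256")
	if err == nil {
		err = verifyLocalFile("kubectl", want)
	}
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}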

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.35s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-279000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.21s)

                                                
                                    
TestDownloadOnlyKic (1.52s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-159000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-159000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-159000
--- PASS: TestDownloadOnlyKic (1.52s)

                                                
                                    
TestBinaryMirror (1.31s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-771000 --alsologtostderr --binary-mirror http://127.0.0.1:56051 --driver=docker 
helpers_test.go:175: Cleaning up "binary-mirror-771000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-771000
--- PASS: TestBinaryMirror (1.31s)
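Note: TestBinaryMirror points minikube at a local mirror via --binary-mirror http://127.0.0.1:56051. Conceptually such a mirror is just a static HTTP file server whose directory layout mimics dl.k8s.io. A minimal Go sketch, with the port taken from the log; the ./mirror directory and its contents are assumed, and this is not the test's own server code.

// mirror.go - sketch of a local binary mirror: serve a directory tree
// (e.g. ./mirror/release/v1.30.3/bin/darwin/amd64/kubectl) over HTTP.
package main

import (
	"log"
	"net/http"
)

func main() {
	log.Fatal(http.ListenAndServe("127.0.0.1:56051", http.FileServer(http.Dir("./mirror"))))
}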

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.15s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-254000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-254000: exit status 85 (148.074977ms)

                                                
                                                
-- stdout --
	* Profile "addons-254000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-254000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.15s)
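Note: the "(dbg) Non-zero exit: ... exit status 85" lines show the harness running the CLI and asserting on its exit code (85 in the runs above, returned when the profile does not exist). A minimal Go sketch of extracting a child process's exit code with os/exec, using the command from the log; the printed assertion is illustrative.

// exitcode.go - run a CLI and inspect its exit code, as the harness's
// "(dbg) Non-zero exit" lines reflect.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "addons", "enable", "dashboard", "-p", "addons-254000")
	out, err := cmd.CombinedOutput()
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		code = ee.ExitCode() // non-zero exit, e.g. 85 above
	} else if err != nil {
		panic(err) // command failed to start at all
	}
	fmt.Printf("exit=%d output=%q\n", code, out)
}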

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.17s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-254000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-254000: exit status 85 (169.327327ms)

                                                
                                                
-- stdout --
	* Profile "addons-254000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-254000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.17s)

                                                
                                    
TestAddons/Setup (221.8s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-254000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-254000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m41.795036139s)
--- PASS: TestAddons/Setup (221.80s)

                                                
                                    
TestAddons/serial/Volcano (40.4s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 15.862714ms
addons_test.go:905: volcano-admission stabilized in 16.041298ms
addons_test.go:897: volcano-scheduler stabilized in 16.108796ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-5wpq7" [257d3a2c-4383-42e1-bb6b-2e7de6412cdf] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003935556s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-bdbsh" [eab4d02a-9f05-475f-b99b-088817db26cf] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003715984s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-m72sb" [7e45d218-6707-4f64-bafe-324979545abd] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003594024s
addons_test.go:932: (dbg) Run:  kubectl --context addons-254000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-254000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-254000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [fabcb6f8-01b5-49bc-98b4-00edc4b7fe47] Pending
helpers_test.go:344: "test-job-nginx-0" [fabcb6f8-01b5-49bc-98b4-00edc4b7fe47] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [fabcb6f8-01b5-49bc-98b4-00edc4b7fe47] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.004442283s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-amd64 -p addons-254000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-amd64 -p addons-254000 addons disable volcano --alsologtostderr -v=1: (10.09181932s)
--- PASS: TestAddons/serial/Volcano (40.40s)
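Note: each 'waiting 6m0s for pods matching "app=..."' line above is a poll loop over a label selector until every matching pod reports Running. A minimal sketch of such a wait written against k8s.io/client-go (an assumption; the harness's own helper lives in helpers_test.go); namespace, selector, and timeout values come from the Volcano run above.

// waitpods.go - poll a label selector until all matching pods are Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false
					break
				}
			}
			if running {
				return nil // every matching pod is Running
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for %q in namespace %q", selector, ns)
		case <-time.After(2 * time.Second): // poll interval
		}
	}
}

func main() {
	// Load the default kubeconfig (the integration run sets its own KUBECONFIG).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForRunning(cs, "volcano-system", "app=volcano-scheduler", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("app=volcano-scheduler healthy")
}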

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.1s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-254000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-254000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.66s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-h6gqw" [ec902e36-90c4-41a5-85f3-8f1da33354fe] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003962603s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-254000
addons_test.go:851: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-254000: (5.658309806s)
--- PASS: TestAddons/parallel/InspektorGadget (10.66s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.63s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.850455ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-v2p44" [9fd2f5b6-2288-41aa-a6d2-642be4e5b11b] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005516117s
addons_test.go:417: (dbg) Run:  kubectl --context addons-254000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-254000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.63s)

                                                
                                    
TestAddons/parallel/HelmTiller (10.76s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.26763ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-2wqg6" [4c3c9481-bb29-4df3-8eb5-d2e415a96b10] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004852896s
addons_test.go:475: (dbg) Run:  kubectl --context addons-254000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-254000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.201630094s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-254000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.76s)

                                                
                                    
TestAddons/parallel/CSI (62.19s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 6.554392ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-254000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-254000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [08c3a1c8-08a7-45cf-a0f3-8b1156097d41] Pending
helpers_test.go:344: "task-pv-pod" [08c3a1c8-08a7-45cf-a0f3-8b1156097d41] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [08c3a1c8-08a7-45cf-a0f3-8b1156097d41] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.006020255s
addons_test.go:590: (dbg) Run:  kubectl --context addons-254000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-254000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-254000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-254000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-254000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-254000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-254000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3b01bbaa-a2a7-461e-b7f6-47e338cfc88d] Pending
helpers_test.go:344: "task-pv-pod-restore" [3b01bbaa-a2a7-461e-b7f6-47e338cfc88d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3b01bbaa-a2a7-461e-b7f6-47e338cfc88d] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005770122s
addons_test.go:632: (dbg) Run:  kubectl --context addons-254000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-254000 delete pod task-pv-pod-restore: (1.065443475s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-254000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-254000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-254000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-amd64 -p addons-254000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.690682573s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-amd64 -p addons-254000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (62.19s)
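
Stripped of the polling, the sequence above is one full provision/snapshot/restore cycle against the csi-hostpath driver. A condensed sketch with the same manifests the test uses; each get is repeated until the PVC phase reads Bound (or readyToUse reads true for the snapshot), and pvc-restore.yaml presumably points its dataSource at the snapshot:

	kubectl --context addons-254000 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-254000 get pvc hpvc -n default -o jsonpath={.status.phase}
	kubectl --context addons-254000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-254000 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-254000 get volumesnapshot new-snapshot-demo -n default -o jsonpath={.status.readyToUse}
	kubectl --context addons-254000 delete pod task-pv-pod
	kubectl --context addons-254000 delete pvc hpvc
	kubectl --context addons-254000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-254000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml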

TestAddons/parallel/Headlamp (12.35s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-254000 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-254000 --alsologtostderr -v=1: (1.074152119s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-b47xj" [6fb6c7f7-6223-49c7-8f08-b14f437f9d97] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-b47xj" [6fb6c7f7-6223-49c7-8f08-b14f437f9d97] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.00514123s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-amd64 -p addons-254000 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (12.35s)

TestAddons/parallel/CloudSpanner (5.53s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-26zv7" [41619a26-c0da-4872-a271-1fd173fae664] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00485011s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-254000
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

TestAddons/parallel/LocalPath (42.63s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-254000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-254000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-254000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [99af3a74-ca82-4e18-b583-316def2117d4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [99af3a74-ca82-4e18-b583-316def2117d4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [99af3a74-ca82-4e18-b583-316def2117d4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003159529s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-254000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-amd64 -p addons-254000 ssh "cat /opt/local-path-provisioner/pvc-b8468ac0-93d0-404e-9261-682b0b10368d_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-254000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-254000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 -p addons-254000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-amd64 -p addons-254000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (32.864043475s)
--- PASS: TestAddons/parallel/LocalPath (42.63s)
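
The LocalPath check proves that data written through the provisioner lands on the node's filesystem: the busybox pod presumably writes file1 into the PVC, and the test reads it back over minikube ssh from under /opt/local-path-provisioner. A minimal sketch; the pvc-... directory name is generated per PVC and shown here as a placeholder:

	kubectl --context addons-254000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-254000 apply -f testdata/storage-provisioner-rancher/pod.yaml
	out/minikube-darwin-amd64 -p addons-254000 ssh "cat /opt/local-path-provisioner/<generated-pvc-dir>/file1"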

TestAddons/parallel/NvidiaDevicePlugin (5.48s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9hh5l" [4045de43-c939-469c-9955-bf2428b7d7ee] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004339321s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-254000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.48s)

TestAddons/parallel/Yakd (11.63s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-krhtb" [f894f1d6-f43a-4b20-8599-3b95001bdbaf] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005198568s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-amd64 -p addons-254000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-amd64 -p addons-254000 addons disable yakd --alsologtostderr -v=1: (5.623849786s)
--- PASS: TestAddons/parallel/Yakd (11.63s)

TestAddons/StoppedEnableDisable (11.42s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-254000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-254000: (10.861054851s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-254000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-254000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-254000
--- PASS: TestAddons/StoppedEnableDisable (11.42s)

TestHyperKitDriverInstallOrUpdate (7.73s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.73s)

TestErrorSpam/setup (21.11s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-521000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-521000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-521000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-521000 --driver=docker : (21.108752729s)
--- PASS: TestErrorSpam/setup (21.11s)

TestErrorSpam/start (1.84s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-521000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-521000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-521000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-521000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-521000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-521000 start --dry-run
--- PASS: TestErrorSpam/start (1.84s)

TestErrorSpam/status (0.8s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-521000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-521000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-521000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-521000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-521000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-521000 status
--- PASS: TestErrorSpam/status (0.80s)

TestErrorSpam/pause (1.39s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-521000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-521000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-521000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-521000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-521000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-521000 pause
--- PASS: TestErrorSpam/pause (1.39s)

TestErrorSpam/unpause (1.42s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-521000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-521000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-521000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-521000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-521000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-521000 unpause
--- PASS: TestErrorSpam/unpause (1.42s)

TestErrorSpam/stop (2.41s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-521000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-521000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-521000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-521000 stop: (1.907800234s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-521000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-521000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-521000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-521000 stop
--- PASS: TestErrorSpam/stop (2.41s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19338-16127/.minikube/files/etc/test/nested/copy/16665/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (37.3s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-516000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-516000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (37.297040712s)
--- PASS: TestFunctional/serial/StartWithProxy (37.30s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.78s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-516000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-516000 --alsologtostderr -v=8: (29.779777318s)
functional_test.go:659: soft start took 29.780366479s for "functional-516000" cluster.
--- PASS: TestFunctional/serial/SoftStart (29.78s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-516000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-516000 cache add registry.k8s.io/pause:3.1: (1.161511221s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-516000 cache add registry.k8s.io/pause:3.3: (1.201715997s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-516000 cache add registry.k8s.io/pause:latest: (1.004179934s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.37s)

TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-516000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local683203933/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 cache add minikube-local-cache-test:functional-516000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-516000 cache add minikube-local-cache-test:functional-516000: (1.024777877s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 cache delete minikube-local-cache-test:functional-516000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-516000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.41s)
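
add_local exercises the same cache path as add_remote, but with an image that exists only in the local Docker daemon, so it has to be saved out of the daemon rather than pulled from a registry. A minimal sketch; <context-dir> stands in for any small Docker build context:

	docker build -t minikube-local-cache-test:functional-516000 <context-dir>
	out/minikube-darwin-amd64 -p functional-516000 cache add minikube-local-cache-test:functional-516000
	out/minikube-darwin-amd64 -p functional-516000 cache delete minikube-local-cache-test:functional-516000
	docker rmi minikube-local-cache-test:functional-516000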

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-516000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (252.113078ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.47s)
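
The Non-zero exit above is the interesting assertion: after the image is removed inside the node, crictl inspecti must fail, and cache reload must push every cached image back so the same inspecti succeeds again. The round trip, condensed:

	out/minikube-darwin-amd64 -p functional-516000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-amd64 -p functional-516000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail: image gone
	out/minikube-darwin-amd64 -p functional-516000 cache reload
	out/minikube-darwin-amd64 -p functional-516000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after reload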

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (1.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 kubectl -- --context functional-516000 get pods
functional_test.go:712: (dbg) Done: out/minikube-darwin-amd64 -p functional-516000 kubectl -- --context functional-516000 get pods: (1.160730924s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.47s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-516000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-516000 get pods: (1.473416541s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.47s)

TestFunctional/serial/ExtraConfig (43.39s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-516000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0729 10:34:55.753692   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 10:34:55.760545   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 10:34:55.771389   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 10:34:55.793507   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 10:34:55.833810   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 10:34:55.913986   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 10:34:56.074695   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 10:34:56.395537   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 10:34:57.035813   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 10:34:58.316347   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 10:35:00.877888   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 10:35:05.999403   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-516000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.39109786s)
functional_test.go:757: restart took 43.391225284s for "functional-516000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.39s)
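
The interleaved cert_rotation errors reference the addons-254000 profile that was stopped earlier in the run, so they appear to be leftover certificate-watcher noise rather than part of this test. The test itself restarts the cluster with a component flag threaded through --extra-config and requires a fully Ready control plane afterwards:

	out/minikube-darwin-amd64 start -p functional-516000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all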

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-516000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (2.92s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 logs
E0729 10:35:16.239855   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-516000 logs: (2.919079956s)
--- PASS: TestFunctional/serial/LogsCmd (2.92s)

TestFunctional/serial/LogsFileCmd (2.93s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd2580871054/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-516000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd2580871054/001/logs.txt: (2.924868293s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.93s)

TestFunctional/serial/InvalidService (4.16s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-516000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-516000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-516000: exit status 115 (381.276147ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31197 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-516000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.16s)
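
Exit status 115 is the pass condition here: invalidsvc.yaml defines a Service with no running backing pod, and minikube service must refuse with SVC_UNREACHABLE instead of handing back a dead URL. Condensed:

	kubectl --context functional-516000 apply -f testdata/invalidsvc.yaml
	out/minikube-darwin-amd64 service invalid-svc -p functional-516000   # expect exit status 115
	kubectl --context functional-516000 delete -f testdata/invalidsvc.yaml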

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-516000 config get cpus: exit status 14 (62.379582ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-516000 config get cpus: exit status 14 (60.517138ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
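
Both exit status 14 results are deliberate: config get returns non-zero for an unset key, which the test verifies before a set and again after an unset. The round trip:

	out/minikube-darwin-amd64 -p functional-516000 config get cpus     # exit 14 while unset
	out/minikube-darwin-amd64 -p functional-516000 config set cpus 2
	out/minikube-darwin-amd64 -p functional-516000 config get cpus     # now prints 2
	out/minikube-darwin-amd64 -p functional-516000 config unset cpus   # next get is back to exit 14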

TestFunctional/parallel/DashboardCmd (17.53s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-516000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-516000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 18631: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.53s)

TestFunctional/parallel/DryRun (1.75s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-516000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-516000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (1.008692954s)

-- stdout --
	* [functional-516000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0729 10:36:45.333513   18543 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:36:45.333834   18543 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:36:45.333840   18543 out.go:304] Setting ErrFile to fd 2...
	I0729 10:36:45.333844   18543 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:36:45.334038   18543 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 10:36:45.355163   18543 out.go:298] Setting JSON to false
	I0729 10:36:45.380048   18543 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5775,"bootTime":1722268830,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 10:36:45.380175   18543 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:36:45.462708   18543 out.go:177] * [functional-516000] minikube v1.33.1 on Darwin 14.5
	I0729 10:36:45.504600   18543 notify.go:220] Checking for updates...
	I0729 10:36:45.525716   18543 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 10:36:45.588792   18543 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	I0729 10:36:45.651599   18543 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 10:36:45.714526   18543 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:36:45.798563   18543 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	I0729 10:36:45.840521   18543 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:36:45.862607   18543 config.go:182] Loaded profile config "functional-516000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:36:45.863299   18543 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:36:45.888316   18543 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 10:36:45.888472   18543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 10:36:46.047988   18543 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:74 SystemTime:2024-07-29 17:36:46.02083949 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 10:36:46.107314   18543 out.go:177] * Using the docker driver based on existing profile
	I0729 10:36:46.128689   18543 start.go:297] selected driver: docker
	I0729 10:36:46.128719   18543 start.go:901] validating driver "docker" against &{Name:functional-516000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:36:46.128857   18543 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:36:46.171327   18543 out.go:177] 
	W0729 10:36:46.192449   18543 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 10:36:46.213404   18543 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-516000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.75s)
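
Both dry-run branches stop at validation: the first requests 250MB and must exit 23 with RSRC_INSUFFICIENT_REQ_MEMORY (the stderr above puts the usable minimum at 1800MB), while the second drops the memory override and must validate cleanly against the existing profile:

	out/minikube-darwin-amd64 start -p functional-516000 --dry-run --memory 250MB --alsologtostderr --driver=docker   # expect exit status 23
	out/minikube-darwin-amd64 start -p functional-516000 --dry-run --alsologtostderr -v=1 --driver=docker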

TestFunctional/parallel/InternationalLanguage (0.74s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-516000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-516000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (735.956953ms)

-- stdout --
	* [functional-516000] minikube v1.33.1 sur Darwin 14.5
	  - MINIKUBE_LOCATION=19338
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0729 10:36:47.034069   18607 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:36:47.034378   18607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:36:47.034383   18607 out.go:304] Setting ErrFile to fd 2...
	I0729 10:36:47.034387   18607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:36:47.034589   18607 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 10:36:47.036697   18607 out.go:298] Setting JSON to false
	I0729 10:36:47.066503   18607 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5777,"bootTime":1722268830,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 10:36:47.066591   18607 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 10:36:47.088539   18607 out.go:177] * [functional-516000] minikube v1.33.1 sur Darwin 14.5
	I0729 10:36:47.146845   18607 notify.go:220] Checking for updates...
	I0729 10:36:47.168499   18607 out.go:177]   - MINIKUBE_LOCATION=19338
	I0729 10:36:47.211478   18607 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
	I0729 10:36:47.271376   18607 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 10:36:47.313393   18607 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:36:47.387612   18607 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube
	I0729 10:36:47.445710   18607 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:36:47.468205   18607 config.go:182] Loaded profile config "functional-516000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:36:47.468841   18607 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:36:47.492725   18607 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 10:36:47.492889   18607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 10:36:47.573832   18607 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:74 SystemTime:2024-07-29 17:36:47.564411596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768053248 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 10:36:47.595659   18607 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0729 10:36:47.616716   18607 start.go:297] selected driver: docker
	I0729 10:36:47.616745   18607 start.go:901] validating driver "docker" against &{Name:functional-516000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-516000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:36:47.616878   18607 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:36:47.642520   18607 out.go:177] 
	W0729 10:36:47.664316   18607 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0729 10:36:47.685573   18607 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.74s)
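
For reference, the localized lines above translate as: "Utilisation du pilote docker basé sur le profil existant" = "Using the docker driver based on the existing profile", and the X line = "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is less than the usable minimum of 1800MB". The test forces the same memory failure as DryRun and only asserts that the message comes out in French. A minimal sketch of reproducing it by hand, assuming minikube picks the translation up from the locale environment (LC_ALL/LANG):

# hedged sketch: force the localized dry-run failure outside the harness
LC_ALL=fr out/minikube-darwin-amd64 start -p functional-516000 --dry-run --memory 250MB --alsologtostderr --driver=docker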
TestFunctional/parallel/StatusCmd (0.79s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.79s)
TestFunctional/parallel/AddonsCmd (0.23s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)
TestFunctional/parallel/PersistentVolumeClaim (28.25s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [01b46ebf-411a-44ae-ab0b-e4bec68043da] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007530231s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-516000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-516000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-516000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-516000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d8766d61-582e-4a34-88c0-42aa542f9c30] Pending
helpers_test.go:344: "sp-pod" [d8766d61-582e-4a34-88c0-42aa542f9c30] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0729 10:36:17.683147   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [d8766d61-582e-4a34-88c0-42aa542f9c30] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004619764s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-516000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-516000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-516000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a37de4e4-8761-4fec-b400-159758ae8c03] Pending
helpers_test.go:344: "sp-pod" [a37de4e4-8761-4fec-b400-159758ae8c03] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a37de4e4-8761-4fec-b400-159758ae8c03] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.005241715s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-516000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.25s)
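
The sequence above is a persistence round trip: create the claim, write a file into the PVC-backed mount from the first sp-pod, delete the pod, recreate it from the same manifest, and confirm the file survived. Condensed, using the same testdata manifests the harness ran:

# recap of the commands exercised above
kubectl --context functional-516000 apply -f testdata/storage-provisioner/pvc.yaml
kubectl --context functional-516000 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-516000 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-516000 delete -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-516000 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-516000 exec sp-pod -- ls /tmp/mount    # foo should still be listed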
TestFunctional/parallel/SSHCmd (0.52s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)
TestFunctional/parallel/CpCmd (1.82s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh -n functional-516000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 cp functional-516000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd3815532337/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh -n functional-516000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh -n functional-516000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.82s)
TestFunctional/parallel/MySQL (27.32s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-516000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-8fwls" [0490690f-1e04-4a75-ad91-2ef93e73497f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-8fwls" [0490690f-1e04-4a75-ad91-2ef93e73497f] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.004082918s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-516000 exec mysql-64454c8b5c-8fwls -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-516000 exec mysql-64454c8b5c-8fwls -- mysql -ppassword -e "show databases;": exit status 1 (147.363809ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-516000 exec mysql-64454c8b5c-8fwls -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-516000 exec mysql-64454c8b5c-8fwls -- mysql -ppassword -e "show databases;": exit status 1 (114.425602ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-516000 exec mysql-64454c8b5c-8fwls -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.32s)
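
The two non-zero exits above are expected startup noise: ERROR 1045 typically appears while the container's init scripts are still provisioning the root password, and ERROR 2002 while mysqld's socket is not yet listening. The harness simply reruns the query until it succeeds; a minimal sketch of the same polling, assuming the pod name from this run:

# hedged sketch: poll until mysqld inside the pod accepts the query
until kubectl --context functional-516000 exec mysql-64454c8b5c-8fwls -- \
  mysql -ppassword -e "show databases;"; do
  sleep 2
done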
TestFunctional/parallel/FileSync (0.34s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/16665/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh "sudo cat /etc/test/nested/copy/16665/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)
TestFunctional/parallel/CertSync (1.84s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/16665.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh "sudo cat /etc/ssl/certs/16665.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/16665.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh "sudo cat /usr/share/ca-certificates/16665.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/166652.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh "sudo cat /etc/ssl/certs/166652.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/166652.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh "sudo cat /usr/share/ca-certificates/166652.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.84s)
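
Each certificate is verified both at its literal paths and under a hashed name (51391683.0, 3ec20f2e.0). Those names follow OpenSSL's subject-hash convention for CA directories, with the .0 suffix as a collision index; a sketch of deriving the expected name for a synced cert, assuming openssl is available in the node image:

# hedged sketch: print the subject hash that OpenSSL-based tools look the cert up by
out/minikube-darwin-amd64 -p functional-516000 ssh "sudo openssl x509 -noout -hash -in /etc/ssl/certs/16665.pem"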
TestFunctional/parallel/NodeLabels (0.05s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-516000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-516000 ssh "sudo systemctl is-active crio": exit status 1 (265.235585ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)
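
The non-zero exit is the passing condition here: systemctl is-active exits 0 only for an active unit, and the conventional code for "inactive" is 3, which matches the "Process exited with status 3" that minikube ssh then surfaces as its own exit status 1. The same probe by hand:

# a runtime other than the active one should print "inactive" and exit non-zero
out/minikube-darwin-amd64 -p functional-516000 ssh "sudo systemctl is-active crio" || echo "crio inactive, as expected on a docker-runtime node"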
TestFunctional/parallel/License (0.54s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.54s)
TestFunctional/parallel/Version/short (0.11s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)
TestFunctional/parallel/Version/components (0.75s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.75s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-516000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-516000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:functional-516000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-516000 image ls --format short --alsologtostderr:
I0729 10:36:57.806994   18792 out.go:291] Setting OutFile to fd 1 ...
I0729 10:36:57.807316   18792 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:36:57.807322   18792 out.go:304] Setting ErrFile to fd 2...
I0729 10:36:57.807326   18792 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:36:57.807520   18792 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
I0729 10:36:57.808238   18792 config.go:182] Loaded profile config "functional-516000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:36:57.808342   18792 config.go:182] Loaded profile config "functional-516000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:36:57.808726   18792 cli_runner.go:164] Run: docker container inspect functional-516000 --format={{.State.Status}}
I0729 10:36:57.831968   18792 ssh_runner.go:195] Run: systemctl --version
I0729 10:36:57.832061   18792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-516000
I0729 10:36:57.853559   18792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56841 SSHKeyPath:/Users/jenkins/minikube-integration/19338-16127/.minikube/machines/functional-516000/id_rsa Username:docker}
I0729 10:36:57.941787   18792 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-516000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.30.3           | 55bb025d2cfa5 | 84.7MB |
| docker.io/library/nginx                     | latest            | a72860cb95fd5 | 188MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| docker.io/library/minikube-local-cache-test | functional-516000 | 8b12507477ac2 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.30.3           | 3edc18e7b7672 | 62MB   |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kicbase/echo-server               | functional-516000 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 1f6d574d502f3 | 117MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/localhost/my-image                | functional-516000 | bdeac99ac1968 | 1.24MB |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 76932a3b37d7e | 111MB  |
| docker.io/library/nginx                     | alpine            | 1ae23480369fa | 43.2MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-516000 image ls --format table --alsologtostderr:
I0729 10:37:01.163167   18820 out.go:291] Setting OutFile to fd 1 ...
I0729 10:37:01.163354   18820 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:37:01.163359   18820 out.go:304] Setting ErrFile to fd 2...
I0729 10:37:01.163363   18820 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:37:01.163555   18820 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
I0729 10:37:01.164133   18820 config.go:182] Loaded profile config "functional-516000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:37:01.164227   18820 config.go:182] Loaded profile config "functional-516000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:37:01.164611   18820 cli_runner.go:164] Run: docker container inspect functional-516000 --format={{.State.Status}}
I0729 10:37:01.183784   18820 ssh_runner.go:195] Run: systemctl --version
I0729 10:37:01.183853   18820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-516000
I0729 10:37:01.201834   18820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56841 SSHKeyPath:/Users/jenkins/minikube-integration/19338-16127/.minikube/machines/functional-516000/id_rsa Username:docker}
I0729 10:37:01.287163   18820 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/07/29 10:37:05 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-516000 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-516000"],"size":"4940000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee097
2d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"bdeac99ac1968779250547809c1b1a02eb206b80cc499b9cda609a90135736df","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-516000"],"size":"1240000"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117000000"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"62000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"r
epoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"8b12507477ac2c70a6b615bbe2899fdc8090989c4a1b998efc7c3e353fa9aa47","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-516000"],"size":"30"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"84700000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30
.3"],"size":"111000000"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-516000 image ls --format json --alsologtostderr:
I0729 10:37:00.930298   18816 out.go:291] Setting OutFile to fd 1 ...
I0729 10:37:00.930526   18816 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:37:00.930531   18816 out.go:304] Setting ErrFile to fd 2...
I0729 10:37:00.930535   18816 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:37:00.930736   18816 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
I0729 10:37:00.931324   18816 config.go:182] Loaded profile config "functional-516000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:37:00.931414   18816 config.go:182] Loaded profile config "functional-516000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:37:00.931785   18816 cli_runner.go:164] Run: docker container inspect functional-516000 --format={{.State.Status}}
I0729 10:37:00.951553   18816 ssh_runner.go:195] Run: systemctl --version
I0729 10:37:00.951653   18816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-516000
I0729 10:37:00.972622   18816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56841 SSHKeyPath:/Users/jenkins/minikube-integration/19338-16127/.minikube/machines/functional-516000/id_rsa Username:docker}
I0729 10:37:01.057800   18816 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
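
The \u003cnone\u003e sequences above are just <none> after HTML-safe escaping of angle brackets, which Go's JSON encoder applies by default; any JSON consumer decodes them transparently. A sketch, assuming jq is installed on the host:

# hedged sketch: jq decodes the \u003c...\u003e escapes back to <none>
out/minikube-darwin-amd64 -p functional-516000 image ls --format json | jq -r '.[].repoTags[]'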
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-516000 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 8b12507477ac2c70a6b615bbe2899fdc8090989c4a1b998efc7c3e353fa9aa47
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-516000
size: "30"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "62000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "111000000"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-516000
size: "4940000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117000000"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "84700000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-516000 image ls --format yaml --alsologtostderr:
I0729 10:36:58.057229   18798 out.go:291] Setting OutFile to fd 1 ...
I0729 10:36:58.057431   18798 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:36:58.057436   18798 out.go:304] Setting ErrFile to fd 2...
I0729 10:36:58.057440   18798 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:36:58.057630   18798 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
I0729 10:36:58.058285   18798 config.go:182] Loaded profile config "functional-516000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:36:58.058384   18798 config.go:182] Loaded profile config "functional-516000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:36:58.058780   18798 cli_runner.go:164] Run: docker container inspect functional-516000 --format={{.State.Status}}
I0729 10:36:58.079968   18798 ssh_runner.go:195] Run: systemctl --version
I0729 10:36:58.080059   18798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-516000
I0729 10:36:58.101997   18798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56841 SSHKeyPath:/Users/jenkins/minikube-integration/19338-16127/.minikube/machines/functional-516000/id_rsa Username:docker}
I0729 10:36:58.189686   18798 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
TestFunctional/parallel/ImageCommands/ImageBuild (2.63s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-516000 ssh pgrep buildkitd: exit status 1 (248.770305ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 image build -t localhost/my-image:functional-516000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-516000 image build -t localhost/my-image:functional-516000 testdata/build --alsologtostderr: (2.156526602s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-516000 image build -t localhost/my-image:functional-516000 testdata/build --alsologtostderr:
I0729 10:36:58.542501   18808 out.go:291] Setting OutFile to fd 1 ...
I0729 10:36:58.542796   18808 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:36:58.542801   18808 out.go:304] Setting ErrFile to fd 2...
I0729 10:36:58.542805   18808 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:36:58.543033   18808 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
I0729 10:36:58.543686   18808 config.go:182] Loaded profile config "functional-516000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:36:58.544347   18808 config.go:182] Loaded profile config "functional-516000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 10:36:58.544762   18808 cli_runner.go:164] Run: docker container inspect functional-516000 --format={{.State.Status}}
I0729 10:36:58.564523   18808 ssh_runner.go:195] Run: systemctl --version
I0729 10:36:58.564595   18808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-516000
I0729 10:36:58.585606   18808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56841 SSHKeyPath:/Users/jenkins/minikube-integration/19338-16127/.minikube/machines/functional-516000/id_rsa Username:docker}
I0729 10:36:58.671983   18808 build_images.go:161] Building image from path: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.1291927836.tar
I0729 10:36:58.672095   18808 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0729 10:36:58.681517   18808 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1291927836.tar
I0729 10:36:58.686091   18808 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1291927836.tar: stat -c "%s %y" /var/lib/minikube/build/build.1291927836.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1291927836.tar': No such file or directory
I0729 10:36:58.686132   18808 ssh_runner.go:362] scp /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.1291927836.tar --> /var/lib/minikube/build/build.1291927836.tar (3072 bytes)
I0729 10:36:58.709753   18808 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1291927836
I0729 10:36:58.719657   18808 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1291927836 -xf /var/lib/minikube/build/build.1291927836.tar
I0729 10:36:58.729200   18808 docker.go:360] Building image: /var/lib/minikube/build/build.1291927836
I0729 10:36:58.729291   18808 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-516000 /var/lib/minikube/build/build.1291927836
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.4s
#6 [2/3] RUN true
#6 DONE 0.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:bdeac99ac1968779250547809c1b1a02eb206b80cc499b9cda609a90135736df done
#8 naming to localhost/my-image:functional-516000 done
#8 DONE 0.0s
I0729 10:37:00.596970   18808 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-516000 /var/lib/minikube/build/build.1291927836: (1.867640732s)
I0729 10:37:00.597048   18808 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1291927836
I0729 10:37:00.606048   18808 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1291927836.tar
I0729 10:37:00.614751   18808 build_images.go:217] Built localhost/my-image:functional-516000 from /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.1291927836.tar
I0729 10:37:00.614786   18808 build_images.go:133] succeeded building to: functional-516000
I0729 10:37:00.614791   18808 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.63s)
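
The BuildKit trace above pins down the test's build context: a 97-byte Dockerfile with a busybox base, a no-op RUN, and a single ADD of content.txt. A hedged reconstruction of testdata/build (the actual contents of content.txt are not visible in the log):

# rebuild an equivalent context and run the same image build by hand
mkdir -p testdata-rebuild
printf 'placeholder' > testdata-rebuild/content.txt   # stand-in; real file contents not shown above
cat > testdata-rebuild/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
out/minikube-darwin-amd64 -p functional-516000 image build -t localhost/my-image:functional-516000 testdata-rebuild --alsologtostderr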
TestFunctional/parallel/ImageCommands/Setup (1.87s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.842855753s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-516000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.87s)
TestFunctional/parallel/DockerEnv/bash (1.08s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-516000 docker-env) && out/minikube-darwin-amd64 status -p functional-516000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-516000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.08s)
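
docker-env prints shell exports (DOCKER_HOST and friends), so eval'ing it points the local docker CLI at the daemon inside the functional-516000 node, which is why the docker images call above lists the cluster's images. The interactive pattern, with the --unset flag assumed for switching back:

# hedged sketch: target the cluster's daemon, then restore the host daemon
eval $(out/minikube-darwin-amd64 -p functional-516000 docker-env)
docker images        # now lists images inside the minikube node
eval $(out/minikube-darwin-amd64 -p functional-516000 docker-env --unset)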
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)
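
All three UpdateContextCmd cases exercise the same command: update-context rewrites the profile's kubeconfig entry so it points at the cluster's current IP and port. For reference, the invocation is simply:

	$ minikube -p functional-516000 update-context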

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 image load --daemon docker.io/kicbase/echo-server:functional-516000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.12s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 image load --daemon docker.io/kicbase/echo-server:functional-516000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-516000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 image load --daemon docker.io/kicbase/echo-server:functional-516000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.57s)
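
The three *LoadDaemon cases share one pattern: stage an image in the host's Docker daemon under the profile-specific tag, ship it into the cluster with image load --daemon, then confirm it with image ls. Condensed from the commands logged above:

	$ docker pull docker.io/kicbase/echo-server:latest
	$ docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-516000
	$ minikube -p functional-516000 image load --daemon docker.io/kicbase/echo-server:functional-516000
	$ minikube -p functional-516000 image ls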

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 image save docker.io/kicbase/echo-server:functional-516000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 image rm docker.io/kicbase/echo-server:functional-516000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.85s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-516000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 image save --daemon docker.io/kicbase/echo-server:functional-516000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-516000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
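
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon make a full round trip: save a cluster image to a tarball, delete it, load it back from the file, then push it into the host daemon with image save --daemon. A condensed sketch (the tarball path below is a placeholder for the Jenkins workspace path in the log):

	$ minikube -p functional-516000 image save docker.io/kicbase/echo-server:functional-516000 /tmp/echo-server-save.tar
	$ minikube -p functional-516000 image rm docker.io/kicbase/echo-server:functional-516000
	$ minikube -p functional-516000 image load /tmp/echo-server-save.tar
	$ minikube -p functional-516000 image save --daemon docker.io/kicbase/echo-server:functional-516000
	$ docker image inspect docker.io/kicbase/echo-server:functional-516000   # verify it landed in the host daemon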

TestFunctional/parallel/ServiceCmd/DeployApp (23.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-516000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-516000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-mj7zk" [d3ee25aa-b0c0-4c65-b293-94be3cf02aee] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E0729 10:35:36.721100   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
helpers_test.go:344: "hello-node-6d85cfcfd8-mj7zk" [d3ee25aa-b0c0-4c65-b293-94be3cf02aee] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 23.005076098s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (23.22s)
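
The 23s here is almost entirely the wait for the echoserver pod to pull its image and turn Ready; the setup itself is two kubectl commands:

	$ kubectl --context functional-516000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	$ kubectl --context functional-516000 expose deployment hello-node --type=NodePort --port=8080
	$ kubectl --context functional-516000 get pods -l app=hello-node -w   # wait for Running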

TestFunctional/parallel/ServiceCmd/List (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.30s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 service list -o json
functional_test.go:1490: Took "310.170688ms" to run "out/minikube-darwin-amd64 -p functional-516000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-516000 service --namespace=default --https --url hello-node: signal: killed (15.003415753s)
-- stdout --
	https://127.0.0.1:57085
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:57085
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)
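
The "signal: killed" non-zero exit is expected rather than a failure: with the docker driver on darwin there is no routable node IP, so minikube service --url opens a tunnel and has to stay in the foreground (hence the "terminal needs to be open" warning). The test reads the printed URL, then kills the process. Manually:

	$ minikube -p functional-516000 service hello-node --namespace=default --https --url
	# prints e.g. https://127.0.0.1:57085 and keeps the tunnel open until interrupted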

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-516000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-516000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-516000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-516000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 18414: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-516000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.15s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-516000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [dc95fdd8-4f0c-4d87-9767-6aba9ccb4059] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [dc95fdd8-4f0c-4d87-9767-6aba9ccb4059] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.00455151s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.15s)
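
minikube tunnel is what lets the LoadBalancer service get an ingress IP at all on this driver; without it nginx-svc would stay pending. The shape of the check, per the commands above:

	$ minikube -p functional-516000 tunnel &   # keep running in the background
	$ kubectl --context functional-516000 apply -f testdata/testsvc.yaml
	$ kubectl --context functional-516000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'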

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-516000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-516000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 18437: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-516000 service hello-node --url --format={{.IP}}: signal: killed (15.004092562s)
-- stdout --
	127.0.0.1
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

TestFunctional/parallel/ServiceCmd/URL (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-516000 service hello-node --url: signal: killed (15.004936129s)
-- stdout --
	http://127.0.0.1:57152
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:57152
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.01s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "285.3337ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "80.621448ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "299.163388ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "101.838123ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/MountCmd/any-port (7.29s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-516000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2082914059/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722274604994322000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2082914059/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722274604994322000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2082914059/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722274604994322000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2082914059/001/test-1722274604994322000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-516000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (289.530026ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 29 17:36 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 29 17:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 29 17:36 test-1722274604994322000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh cat /mount-9p/test-1722274604994322000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-516000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8eedaec8-92e9-4be5-94c8-d1f07f8c1b12] Pending
helpers_test.go:344: "busybox-mount" [8eedaec8-92e9-4be5-94c8-d1f07f8c1b12] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8eedaec8-92e9-4be5-94c8-d1f07f8c1b12] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8eedaec8-92e9-4be5-94c8-d1f07f8c1b12] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003618802s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-516000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-516000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port2082914059/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.29s)
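
The first findmnt probe failing with exit status 1 is part of the normal flow: the 9p server takes a moment to come up, so the helper retries until the mount appears. The usage being exercised (the host directory below is a placeholder):

	$ minikube mount -p functional-516000 /some/host/dir:/mount-9p &   # serve the host dir into the guest over 9p
	$ minikube -p functional-516000 ssh "findmnt -T /mount-9p | grep 9p"
	$ minikube -p functional-516000 ssh -- ls -la /mount-9p
	$ minikube mount -p functional-516000 --kill=true   # tear down all mounts, as VerifyCleanup does below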

TestFunctional/parallel/MountCmd/specific-port (1.86s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-516000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1450017121/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-516000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (261.368371ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-516000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1450017121/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-516000 ssh "sudo umount -f /mount-9p": exit status 1 (262.669368ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-516000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-516000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1450017121/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.86s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.11s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-516000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup293245238/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-516000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup293245238/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-516000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup293245238/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-516000 ssh "findmnt -T" /mount1: exit status 1 (372.405486ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-516000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-516000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-516000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup293245238/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-516000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup293245238/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-516000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup293245238/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.11s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-516000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-516000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-516000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (106.13s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-300000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker 
E0729 10:37:39.604505   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-300000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker : (1m45.425620868s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (106.13s)
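
This brings up a three-control-plane HA cluster in one shot; the worker is attached later with node add. Stripped of the harness's logging flags, the start looks like:

	$ minikube start -p ha-300000 --ha --wait=true --memory=2200 --driver=docker
	$ minikube -p ha-300000 status   # prints one status block per node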

TestMultiControlPlane/serial/DeployApp (5.48s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-300000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-300000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-300000 -- rollout status deployment/busybox: (2.915246091s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-300000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-300000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-300000 -- exec busybox-fc5497c4f-9245w -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-300000 -- exec busybox-fc5497c4f-nswzw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-300000 -- exec busybox-fc5497c4f-q2td8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-300000 -- exec busybox-fc5497c4f-9245w -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-300000 -- exec busybox-fc5497c4f-nswzw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-300000 -- exec busybox-fc5497c4f-q2td8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-300000 -- exec busybox-fc5497c4f-9245w -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-300000 -- exec busybox-fc5497c4f-nswzw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-300000 -- exec busybox-fc5497c4f-q2td8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.48s)

TestMultiControlPlane/serial/PingHostFromPods (1.35s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-300000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-300000 -- exec busybox-fc5497c4f-9245w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-300000 -- exec busybox-fc5497c4f-9245w -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-300000 -- exec busybox-fc5497c4f-nswzw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-300000 -- exec busybox-fc5497c4f-nswzw -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-300000 -- exec busybox-fc5497c4f-q2td8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-300000 -- exec busybox-fc5497c4f-q2td8 -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.35s)

TestMultiControlPlane/serial/AddWorkerNode (20.66s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-300000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-300000 -v=7 --alsologtostderr: (19.806924872s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.66s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-300000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.69s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.69s)

TestMultiControlPlane/serial/CopyFile (16.09s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 cp testdata/cp-test.txt ha-300000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 cp ha-300000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile1899184842/001/cp-test_ha-300000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 cp ha-300000:/home/docker/cp-test.txt ha-300000-m02:/home/docker/cp-test_ha-300000_ha-300000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m02 "sudo cat /home/docker/cp-test_ha-300000_ha-300000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 cp ha-300000:/home/docker/cp-test.txt ha-300000-m03:/home/docker/cp-test_ha-300000_ha-300000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m03 "sudo cat /home/docker/cp-test_ha-300000_ha-300000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 cp ha-300000:/home/docker/cp-test.txt ha-300000-m04:/home/docker/cp-test_ha-300000_ha-300000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m04 "sudo cat /home/docker/cp-test_ha-300000_ha-300000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 cp testdata/cp-test.txt ha-300000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 cp ha-300000-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile1899184842/001/cp-test_ha-300000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 cp ha-300000-m02:/home/docker/cp-test.txt ha-300000:/home/docker/cp-test_ha-300000-m02_ha-300000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000 "sudo cat /home/docker/cp-test_ha-300000-m02_ha-300000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 cp ha-300000-m02:/home/docker/cp-test.txt ha-300000-m03:/home/docker/cp-test_ha-300000-m02_ha-300000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m03 "sudo cat /home/docker/cp-test_ha-300000-m02_ha-300000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 cp ha-300000-m02:/home/docker/cp-test.txt ha-300000-m04:/home/docker/cp-test_ha-300000-m02_ha-300000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m04 "sudo cat /home/docker/cp-test_ha-300000-m02_ha-300000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 cp testdata/cp-test.txt ha-300000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 cp ha-300000-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile1899184842/001/cp-test_ha-300000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 cp ha-300000-m03:/home/docker/cp-test.txt ha-300000:/home/docker/cp-test_ha-300000-m03_ha-300000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000 "sudo cat /home/docker/cp-test_ha-300000-m03_ha-300000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 cp ha-300000-m03:/home/docker/cp-test.txt ha-300000-m02:/home/docker/cp-test_ha-300000-m03_ha-300000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m02 "sudo cat /home/docker/cp-test_ha-300000-m03_ha-300000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 cp ha-300000-m03:/home/docker/cp-test.txt ha-300000-m04:/home/docker/cp-test_ha-300000-m03_ha-300000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m04 "sudo cat /home/docker/cp-test_ha-300000-m03_ha-300000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 cp testdata/cp-test.txt ha-300000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 cp ha-300000-m04:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile1899184842/001/cp-test_ha-300000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 cp ha-300000-m04:/home/docker/cp-test.txt ha-300000:/home/docker/cp-test_ha-300000-m04_ha-300000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000 "sudo cat /home/docker/cp-test_ha-300000-m04_ha-300000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 cp ha-300000-m04:/home/docker/cp-test.txt ha-300000-m02:/home/docker/cp-test_ha-300000-m04_ha-300000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m02 "sudo cat /home/docker/cp-test_ha-300000-m04_ha-300000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 cp ha-300000-m04:/home/docker/cp-test.txt ha-300000-m03:/home/docker/cp-test_ha-300000-m04_ha-300000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 ssh -n ha-300000-m03 "sudo cat /home/docker/cp-test_ha-300000-m04_ha-300000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.09s)
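
The CopyFile matrix is minikube cp in every direction: host to node, node to host, and node to node, each leg verified with ssh -n <node> sudo cat. One leg, condensed from the commands above:

	$ minikube -p ha-300000 cp testdata/cp-test.txt ha-300000-m02:/home/docker/cp-test.txt
	$ minikube -p ha-300000 ssh -n ha-300000-m02 "sudo cat /home/docker/cp-test.txt"
	$ minikube -p ha-300000 cp ha-300000-m02:/home/docker/cp-test.txt ./cp-test_ha-300000-m02.txt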

TestMultiControlPlane/serial/StopSecondaryNode (11.35s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-300000 node stop m02 -v=7 --alsologtostderr: (10.713876339s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-300000 status -v=7 --alsologtostderr: exit status 7 (636.648002ms)
-- stdout --
	ha-300000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-300000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-300000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-300000-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0729 10:39:49.201383   19588 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:39:49.201590   19588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:39:49.201595   19588 out.go:304] Setting ErrFile to fd 2...
	I0729 10:39:49.201599   19588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:39:49.201780   19588 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 10:39:49.201968   19588 out.go:298] Setting JSON to false
	I0729 10:39:49.201998   19588 mustload.go:65] Loading cluster: ha-300000
	I0729 10:39:49.202043   19588 notify.go:220] Checking for updates...
	I0729 10:39:49.202319   19588 config.go:182] Loaded profile config "ha-300000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:39:49.202333   19588 status.go:255] checking status of ha-300000 ...
	I0729 10:39:49.202728   19588 cli_runner.go:164] Run: docker container inspect ha-300000 --format={{.State.Status}}
	I0729 10:39:49.221181   19588 status.go:330] ha-300000 host status = "Running" (err=<nil>)
	I0729 10:39:49.221222   19588 host.go:66] Checking if "ha-300000" exists ...
	I0729 10:39:49.221471   19588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-300000
	I0729 10:39:49.239446   19588 host.go:66] Checking if "ha-300000" exists ...
	I0729 10:39:49.239733   19588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:39:49.239795   19588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-300000
	I0729 10:39:49.261753   19588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57300 SSHKeyPath:/Users/jenkins/minikube-integration/19338-16127/.minikube/machines/ha-300000/id_rsa Username:docker}
	I0729 10:39:49.349232   19588 ssh_runner.go:195] Run: systemctl --version
	I0729 10:39:49.353671   19588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:39:49.364456   19588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-300000
	I0729 10:39:49.383205   19588 kubeconfig.go:125] found "ha-300000" server: "https://127.0.0.1:57304"
	I0729 10:39:49.383234   19588 api_server.go:166] Checking apiserver status ...
	I0729 10:39:49.383277   19588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:39:49.394478   19588 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2422/cgroup
	W0729 10:39:49.403543   19588 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2422/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:39:49.403595   19588 ssh_runner.go:195] Run: ls
	I0729 10:39:49.407528   19588 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57304/healthz ...
	I0729 10:39:49.411420   19588 api_server.go:279] https://127.0.0.1:57304/healthz returned 200:
	ok
	I0729 10:39:49.411433   19588 status.go:422] ha-300000 apiserver status = Running (err=<nil>)
	I0729 10:39:49.411445   19588 status.go:257] ha-300000 status: &{Name:ha-300000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:39:49.411457   19588 status.go:255] checking status of ha-300000-m02 ...
	I0729 10:39:49.411704   19588 cli_runner.go:164] Run: docker container inspect ha-300000-m02 --format={{.State.Status}}
	I0729 10:39:49.429857   19588 status.go:330] ha-300000-m02 host status = "Stopped" (err=<nil>)
	I0729 10:39:49.429880   19588 status.go:343] host is not running, skipping remaining checks
	I0729 10:39:49.429889   19588 status.go:257] ha-300000-m02 status: &{Name:ha-300000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:39:49.429907   19588 status.go:255] checking status of ha-300000-m03 ...
	I0729 10:39:49.430198   19588 cli_runner.go:164] Run: docker container inspect ha-300000-m03 --format={{.State.Status}}
	I0729 10:39:49.448176   19588 status.go:330] ha-300000-m03 host status = "Running" (err=<nil>)
	I0729 10:39:49.448203   19588 host.go:66] Checking if "ha-300000-m03" exists ...
	I0729 10:39:49.448535   19588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-300000-m03
	I0729 10:39:49.466855   19588 host.go:66] Checking if "ha-300000-m03" exists ...
	I0729 10:39:49.467136   19588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:39:49.467189   19588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-300000-m03
	I0729 10:39:49.485258   19588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57410 SSHKeyPath:/Users/jenkins/minikube-integration/19338-16127/.minikube/machines/ha-300000-m03/id_rsa Username:docker}
	I0729 10:39:49.571371   19588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:39:49.582800   19588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-300000
	I0729 10:39:49.601322   19588 kubeconfig.go:125] found "ha-300000" server: "https://127.0.0.1:57304"
	I0729 10:39:49.601346   19588 api_server.go:166] Checking apiserver status ...
	I0729 10:39:49.601388   19588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:39:49.612169   19588 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2286/cgroup
	W0729 10:39:49.621301   19588 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2286/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 10:39:49.621362   19588 ssh_runner.go:195] Run: ls
	I0729 10:39:49.625619   19588 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57304/healthz ...
	I0729 10:39:49.629462   19588 api_server.go:279] https://127.0.0.1:57304/healthz returned 200:
	ok
	I0729 10:39:49.629478   19588 status.go:422] ha-300000-m03 apiserver status = Running (err=<nil>)
	I0729 10:39:49.629493   19588 status.go:257] ha-300000-m03 status: &{Name:ha-300000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:39:49.629503   19588 status.go:255] checking status of ha-300000-m04 ...
	I0729 10:39:49.629762   19588 cli_runner.go:164] Run: docker container inspect ha-300000-m04 --format={{.State.Status}}
	I0729 10:39:49.648127   19588 status.go:330] ha-300000-m04 host status = "Running" (err=<nil>)
	I0729 10:39:49.648153   19588 host.go:66] Checking if "ha-300000-m04" exists ...
	I0729 10:39:49.648407   19588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-300000-m04
	I0729 10:39:49.665869   19588 host.go:66] Checking if "ha-300000-m04" exists ...
	I0729 10:39:49.666160   19588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:39:49.666224   19588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-300000-m04
	I0729 10:39:49.684187   19588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57530 SSHKeyPath:/Users/jenkins/minikube-integration/19338-16127/.minikube/machines/ha-300000-m04/id_rsa Username:docker}
	I0729 10:39:49.769782   19588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:39:49.780316   19588 status.go:257] ha-300000-m04 status: &{Name:ha-300000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.35s)
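
As with the service tests, the non-zero exit is the assertion itself: status reports m02 stopped while the remaining control planes keep serving, and the command exits non-zero (7 in this run) while a node is down. To reproduce:

	$ minikube -p ha-300000 node stop m02
	$ minikube -p ha-300000 status; echo "exit=$?"   # expect a non-zero exit while m02 is stopped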

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)

TestMultiControlPlane/serial/RestartSecondaryNode (20.45s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 node start m02 -v=7 --alsologtostderr
E0729 10:39:55.755783   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-300000 node start m02 -v=7 --alsologtostderr: (19.424027115s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (20.45s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (1.051486411s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.05s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (317.27s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-300000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-300000 -v=7 --alsologtostderr
E0729 10:40:23.446735   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
E0729 10:40:30.801162   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 10:40:30.806686   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 10:40:30.817034   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 10:40:30.837886   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 10:40:30.879592   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 10:40:30.959812   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 10:40:31.121747   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 10:40:31.441920   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 10:40:32.082594   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 10:40:33.363947   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 10:40:35.924095   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 10:40:41.044491   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-300000 -v=7 --alsologtostderr: (33.997033255s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-300000 --wait=true -v=7 --alsologtostderr
E0729 10:40:51.284904   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 10:41:11.765438   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 10:41:52.726381   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 10:43:14.647978   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
E0729 10:44:55.759614   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-300000 --wait=true -v=7 --alsologtostderr: (4m43.129359308s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-300000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (317.27s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.31s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 node delete m03 -v=7 --alsologtostderr
E0729 10:45:30.804905   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-300000 node delete m03 -v=7 --alsologtostderr: (9.543642598s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.31s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.48s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.48s)

TestMultiControlPlane/serial/StopCluster (32.69s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 stop -v=7 --alsologtostderr
E0729 10:45:58.491143   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-300000 stop -v=7 --alsologtostderr: (32.579563925s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-300000 status -v=7 --alsologtostderr: exit status 7 (112.473707ms)

-- stdout --
	ha-300000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-300000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-300000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:46:12.474892   20010 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:46:12.475093   20010 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:12.475099   20010 out.go:304] Setting ErrFile to fd 2...
	I0729 10:46:12.475103   20010 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:46:12.475280   20010 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19338-16127/.minikube/bin
	I0729 10:46:12.475460   20010 out.go:298] Setting JSON to false
	I0729 10:46:12.475484   20010 mustload.go:65] Loading cluster: ha-300000
	I0729 10:46:12.475523   20010 notify.go:220] Checking for updates...
	I0729 10:46:12.475874   20010 config.go:182] Loaded profile config "ha-300000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 10:46:12.475900   20010 status.go:255] checking status of ha-300000 ...
	I0729 10:46:12.476307   20010 cli_runner.go:164] Run: docker container inspect ha-300000 --format={{.State.Status}}
	I0729 10:46:12.494638   20010 status.go:330] ha-300000 host status = "Stopped" (err=<nil>)
	I0729 10:46:12.494683   20010 status.go:343] host is not running, skipping remaining checks
	I0729 10:46:12.494692   20010 status.go:257] ha-300000 status: &{Name:ha-300000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:46:12.494728   20010 status.go:255] checking status of ha-300000-m02 ...
	I0729 10:46:12.495014   20010 cli_runner.go:164] Run: docker container inspect ha-300000-m02 --format={{.State.Status}}
	I0729 10:46:12.512839   20010 status.go:330] ha-300000-m02 host status = "Stopped" (err=<nil>)
	I0729 10:46:12.512859   20010 status.go:343] host is not running, skipping remaining checks
	I0729 10:46:12.512867   20010 status.go:257] ha-300000-m02 status: &{Name:ha-300000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:46:12.512881   20010 status.go:255] checking status of ha-300000-m04 ...
	I0729 10:46:12.513149   20010 cli_runner.go:164] Run: docker container inspect ha-300000-m04 --format={{.State.Status}}
	I0729 10:46:12.530939   20010 status.go:330] ha-300000-m04 host status = "Stopped" (err=<nil>)
	I0729 10:46:12.530960   20010 status.go:343] host is not running, skipping remaining checks
	I0729 10:46:12.530967   20010 status.go:257] ha-300000-m04 status: &{Name:ha-300000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.69s)

TestMultiControlPlane/serial/RestartCluster (82.2s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-300000 --wait=true -v=7 --alsologtostderr --driver=docker 
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-300000 --wait=true -v=7 --alsologtostderr --driver=docker : (1m21.450108407s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (82.20s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.49s)

TestMultiControlPlane/serial/AddSecondaryNode (30.82s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-300000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-300000 --control-plane -v=7 --alsologtostderr: (29.878793322s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-300000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (30.82s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

TestImageBuild/serial/Setup (20.55s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-075000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-075000 --driver=docker : (20.550290381s)
--- PASS: TestImageBuild/serial/Setup (20.55s)

TestImageBuild/serial/NormalBuild (1.79s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-075000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-075000: (1.788438242s)
--- PASS: TestImageBuild/serial/NormalBuild (1.79s)

TestImageBuild/serial/BuildWithBuildArg (0.86s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-075000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.86s)

TestImageBuild/serial/BuildWithDockerIgnore (0.66s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-075000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.66s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.78s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-075000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.78s)

TestJSONOutput/start/Command (35.91s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-597000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-597000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (35.908710177s)
--- PASS: TestJSONOutput/start/Command (35.91s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.45s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-597000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.45s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.52s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-597000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.52s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.75s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-597000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-597000 --output=json --user=testUser: (10.753311385s)
--- PASS: TestJSONOutput/stop/Command (10.75s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.64s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-206000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-206000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (362.808705ms)

-- stdout --
	{"specversion":"1.0","id":"f97f6daa-d43d-4675-9af6-e82419e3b411","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-206000] minikube v1.33.1 on Darwin 14.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a76db45-5645-41cb-bf28-e1c7685ed354","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19338"}}
	{"specversion":"1.0","id":"9df71b10-fb85-41f3-9c40-f912e53948cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig"}}
	{"specversion":"1.0","id":"cc96dfa4-17d4-4cd4-b3d3-d202277b035c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"f537903a-ccc0-4847-9515-acd9b0e03ef0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fe2e6eeb-94b3-4473-9b2d-6e0801bf9761","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19338-16127/.minikube"}}
	{"specversion":"1.0","id":"9e22f0c8-775e-455f-a0a8-e43c3d33eea7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"41a14210-f6d8-48c0-b8ab-7ecee7704447","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-206000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-206000
--- PASS: TestErrorJSONOutput (0.64s)

TestKicCustomNetwork/create_custom_network (22.84s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-604000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-604000 --network=: (20.866155552s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-604000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-604000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-604000: (1.957134196s)
--- PASS: TestKicCustomNetwork/create_custom_network (22.84s)

TestKicCustomNetwork/use_default_bridge_network (22.4s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-293000 --network=bridge
E0729 10:49:55.763260   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-293000 --network=bridge: (20.514445511s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-293000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-293000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-293000: (1.864656059s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.40s)

TestKicExistingNetwork (22.06s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-367000 --network=existing-network
E0729 10:50:30.808780   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-367000 --network=existing-network: (20.030940394s)
helpers_test.go:175: Cleaning up "existing-network-367000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-367000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-367000: (1.854644932s)
--- PASS: TestKicExistingNetwork (22.06s)

TestKicCustomSubnet (22.27s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-991000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-991000 --subnet=192.168.60.0/24: (20.30990502s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-991000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-991000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-991000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-991000: (1.940048324s)
--- PASS: TestKicCustomSubnet (22.27s)

TestKicStaticIP (22.17s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-849000 --static-ip=192.168.200.200
E0729 10:51:18.815313   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-849000 --static-ip=192.168.200.200: (20.010426648s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-849000 ip
helpers_test.go:175: Cleaning up "static-ip-849000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-849000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-849000: (1.995766092s)
--- PASS: TestKicStaticIP (22.17s)

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (49.38s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-220000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-220000 --driver=docker : (21.251373228s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-222000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-222000 --driver=docker : (23.009221331s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-220000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-222000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-222000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-222000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-222000: (1.980083695s)
helpers_test.go:175: Cleaning up "first-220000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-220000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-220000: (1.960650712s)
--- PASS: TestMinikubeProfile (49.38s)

TestMountStart/serial/StartWithMountFirst (7.02s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-718000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-718000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.020437075s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.02s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-718000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (7.38s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-730000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-730000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.383998718s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.38s)

TestPreload (121.33s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-411000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-411000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m17.190235991s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-411000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-411000 image pull gcr.io/k8s-minikube/busybox: (1.344958543s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-411000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-411000: (10.815693828s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-411000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
E0729 11:39:55.873211   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/addons-254000/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-411000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (29.743490424s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-411000 image list
helpers_test.go:175: Cleaning up "test-preload-411000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-411000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-411000: (2.013440473s)
--- PASS: TestPreload (121.33s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (13.04s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
E0729 11:55:30.995120   16665 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19338-16127/.minikube/profiles/functional-516000/client.crt: no such file or directory
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19338
- KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1780121055/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1780121055/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1780121055/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1780121055/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (13.04s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (13.01s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19338
- KUBECONFIG=/Users/jenkins/minikube-integration/19338-16127/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3817580808/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3817580808/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3817580808/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3817580808/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (13.01s)

Test skip (19/212)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestAddons/parallel/Registry (14.91s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.712705ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-w8rk2" [9abd3f03-604c-4043-8d96-da36e3828adc] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005741268s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ctjf9" [18828f75-b27a-4534-84f3-f1644611517e] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005297531s
addons_test.go:342: (dbg) Run:  kubectl --context addons-254000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-254000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-254000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.846382074s)
addons_test.go:357: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (14.91s)

TestAddons/parallel/Ingress (11.79s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-254000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-254000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-254000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [919ebb10-4004-43d4-8ccc-cd4653f58ce8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [919ebb10-4004-43d4-8ccc-cd4653f58ce8] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003915493s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-254000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.79s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (7.12s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-516000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-516000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-44wmm" [6826c5af-ef95-4ede-b1df-266ec37f3903] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-44wmm" [6826c5af-ef95-4ede-b1df-266ec37f3903] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003993054s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.12s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)