Test Report: Docker_macOS 19337

a9f4e4a9a8ef6f7d1064a3bd8285d9113f3d3767 : 2024-07-29 : 35545

Failed tests (22/212)
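Each failed-test entry below follows a fixed shape: the command that was run, a "Non-zero exit" summary carrying the exit status and duration, then `-- stdout --` and `** stderr **` dumps. When triaging many of these reports, the exit status can be pulled out of the summary line mechanically. A minimal sketch, using the summary line from the TestOffline entry below as sample input (the `sed` pattern is an assumption about the harness's wording, matching only what appears in this report):

```shell
# Sample "Non-zero exit" summary line, copied from the TestOffline entry below.
line='aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-789000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m34.000514786s)'

# Extract the numeric exit status; prints an empty value if the pattern is absent.
status=$(printf '%s\n' "$line" | sed -n 's/.*exit status \([0-9][0-9]*\).*(.*)$/\1/p')
echo "exit status: $status"
```

The same pattern applied across a directory of saved reports gives a quick histogram of failure modes (exit status 52 here indicates the `minikube start` guest-provision failure shown in the log).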

TestOffline (754.54s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-789000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-789000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m34.000514786s)

-- stdout --
	* [offline-docker-789000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "offline-docker-789000" primary control-plane node in "offline-docker-789000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-789000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0729 04:46:58.965578    8863 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:46:58.965880    8863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:46:58.965886    8863 out.go:304] Setting ErrFile to fd 2...
	I0729 04:46:58.965889    8863 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:46:58.966067    8863 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 04:46:58.967725    8863 out.go:298] Setting JSON to false
	I0729 04:46:58.991515    8863 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6388,"bootTime":1722247230,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 04:46:58.991619    8863 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:46:59.012986    8863 out.go:177] * [offline-docker-789000] minikube v1.33.1 on Darwin 14.5
	I0729 04:46:59.055127    8863 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 04:46:59.055171    8863 notify.go:220] Checking for updates...
	I0729 04:46:59.097050    8863 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	I0729 04:46:59.118095    8863 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 04:46:59.138867    8863 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:46:59.160050    8863 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	I0729 04:46:59.181068    8863 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:46:59.202138    8863 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:46:59.225687    8863 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 04:46:59.225856    8863 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 04:46:59.306993    8863 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:false NGoroutines:170 SystemTime:2024-07-29 11:46:59.296243037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-
0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-d
esktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plu
gins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 04:46:59.349426    8863 out.go:177] * Using the docker driver based on user configuration
	I0729 04:46:59.370520    8863 start.go:297] selected driver: docker
	I0729 04:46:59.370549    8863 start.go:901] validating driver "docker" against <nil>
	I0729 04:46:59.370565    8863 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:46:59.374819    8863 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 04:46:59.468992    8863 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:false NGoroutines:170 SystemTime:2024-07-29 11:46:59.458793876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-
0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-d
esktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plu
gins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 04:46:59.469178    8863 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:46:59.469382    8863 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:46:59.490348    8863 out.go:177] * Using Docker Desktop driver with root privileges
	I0729 04:46:59.511171    8863 cni.go:84] Creating CNI manager for ""
	I0729 04:46:59.511190    8863 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:46:59.511196    8863 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:46:59.511265    8863 start.go:340] cluster config:
	{Name:offline-docker-789000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-789000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:46:59.532667    8863 out.go:177] * Starting "offline-docker-789000" primary control-plane node in "offline-docker-789000" cluster
	I0729 04:46:59.575417    8863 cache.go:121] Beginning downloading kic base image for docker with docker
	I0729 04:46:59.597329    8863 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0729 04:46:59.639346    8863 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:46:59.639407    8863 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 04:46:59.639421    8863 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0729 04:46:59.639441    8863 cache.go:56] Caching tarball of preloaded images
	I0729 04:46:59.639676    8863 preload.go:172] Found /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0729 04:46:59.639697    8863 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:46:59.641229    8863 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/offline-docker-789000/config.json ...
	I0729 04:46:59.641348    8863 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/offline-docker-789000/config.json: {Name:mk2fe7b03e5d958c9624274e365381975d68a698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0729 04:46:59.729390    8863 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0729 04:46:59.729405    8863 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 04:46:59.729514    8863 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 04:46:59.729534    8863 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 04:46:59.729541    8863 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 04:46:59.729550    8863 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 04:46:59.729554    8863 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0729 04:46:59.953071    8863 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0729 04:46:59.953137    8863 cache.go:194] Successfully downloaded all kic artifacts
	I0729 04:46:59.953186    8863 start.go:360] acquireMachinesLock for offline-docker-789000: {Name:mk67c9dee849ddb5ba73cf6bc7e98d56a1ee5713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:46:59.953362    8863 start.go:364] duration metric: took 164.918µs to acquireMachinesLock for "offline-docker-789000"
	I0729 04:46:59.953395    8863 start.go:93] Provisioning new machine with config: &{Name:offline-docker-789000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-789000 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:46:59.953464    8863 start.go:125] createHost starting for "" (driver="docker")
	I0729 04:46:59.995308    8863 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0729 04:46:59.995525    8863 start.go:159] libmachine.API.Create for "offline-docker-789000" (driver="docker")
	I0729 04:46:59.995551    8863 client.go:168] LocalClient.Create starting
	I0729 04:46:59.995647    8863 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/ca.pem
	I0729 04:46:59.995696    8863 main.go:141] libmachine: Decoding PEM data...
	I0729 04:46:59.995714    8863 main.go:141] libmachine: Parsing certificate...
	I0729 04:46:59.995785    8863 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/cert.pem
	I0729 04:46:59.995824    8863 main.go:141] libmachine: Decoding PEM data...
	I0729 04:46:59.995832    8863 main.go:141] libmachine: Parsing certificate...
	I0729 04:46:59.996237    8863 cli_runner.go:164] Run: docker network inspect offline-docker-789000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 04:47:00.060762    8863 cli_runner.go:211] docker network inspect offline-docker-789000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 04:47:00.060865    8863 network_create.go:284] running [docker network inspect offline-docker-789000] to gather additional debugging logs...
	I0729 04:47:00.060883    8863 cli_runner.go:164] Run: docker network inspect offline-docker-789000
	W0729 04:47:00.084859    8863 cli_runner.go:211] docker network inspect offline-docker-789000 returned with exit code 1
	I0729 04:47:00.084886    8863 network_create.go:287] error running [docker network inspect offline-docker-789000]: docker network inspect offline-docker-789000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-789000 not found
	I0729 04:47:00.084902    8863 network_create.go:289] output of [docker network inspect offline-docker-789000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-789000 not found
	
	** /stderr **
	I0729 04:47:00.085038    8863 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 04:47:00.104620    8863 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:47:00.106039    8863 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:47:00.106498    8863 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001469bf0}
	I0729 04:47:00.106517    8863 network_create.go:124] attempt to create docker network offline-docker-789000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0729 04:47:00.106588    8863 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-789000 offline-docker-789000
	I0729 04:47:00.268121    8863 network_create.go:108] docker network offline-docker-789000 192.168.67.0/24 created
	I0729 04:47:00.268168    8863 kic.go:121] calculated static IP "192.168.67.2" for the "offline-docker-789000" container
	I0729 04:47:00.268277    8863 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 04:47:00.288101    8863 cli_runner.go:164] Run: docker volume create offline-docker-789000 --label name.minikube.sigs.k8s.io=offline-docker-789000 --label created_by.minikube.sigs.k8s.io=true
	I0729 04:47:00.307544    8863 oci.go:103] Successfully created a docker volume offline-docker-789000
	I0729 04:47:00.307680    8863 cli_runner.go:164] Run: docker run --rm --name offline-docker-789000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-789000 --entrypoint /usr/bin/test -v offline-docker-789000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 04:47:01.077408    8863 oci.go:107] Successfully prepared a docker volume offline-docker-789000
	I0729 04:47:01.077458    8863 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:47:01.077480    8863 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 04:47:01.077581    8863 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-789000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0729 04:52:59.996717    8863 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 04:52:59.996909    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:53:00.017394    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	I0729 04:53:00.017500    8863 retry.go:31] will retry after 355.310866ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:00.375197    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:53:00.393640    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	I0729 04:53:00.393740    8863 retry.go:31] will retry after 352.955132ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:00.749144    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:53:00.768334    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	I0729 04:53:00.768454    8863 retry.go:31] will retry after 759.48385ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:01.528381    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:53:01.548097    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	W0729 04:53:01.548201    8863 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	
	W0729 04:53:01.548227    8863 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:01.548285    8863 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 04:53:01.548335    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:53:01.565278    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	I0729 04:53:01.565388    8863 retry.go:31] will retry after 207.454312ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:01.775195    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:53:01.794766    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	I0729 04:53:01.794861    8863 retry.go:31] will retry after 528.920602ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:02.325631    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:53:02.345148    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	I0729 04:53:02.345254    8863 retry.go:31] will retry after 769.903732ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:03.117583    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:53:03.137231    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	W0729 04:53:03.137337    8863 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	
	W0729 04:53:03.137359    8863 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:03.137373    8863 start.go:128] duration metric: took 6m3.184784131s to createHost
	I0729 04:53:03.137379    8863 start.go:83] releasing machines lock for "offline-docker-789000", held for 6m3.184894631s
	W0729 04:53:03.137396    8863 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0729 04:53:03.137832    8863 cli_runner.go:164] Run: docker container inspect offline-docker-789000 --format={{.State.Status}}
	W0729 04:53:03.155139    8863 cli_runner.go:211] docker container inspect offline-docker-789000 --format={{.State.Status}} returned with exit code 1
	I0729 04:53:03.155193    8863 delete.go:82] Unable to get host status for offline-docker-789000, assuming it has already been deleted: state: unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	W0729 04:53:03.155265    8863 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0729 04:53:03.155278    8863 start.go:729] Will try again in 5 seconds ...
	I0729 04:53:08.155647    8863 start.go:360] acquireMachinesLock for offline-docker-789000: {Name:mk67c9dee849ddb5ba73cf6bc7e98d56a1ee5713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:53:08.155879    8863 start.go:364] duration metric: took 182.976µs to acquireMachinesLock for "offline-docker-789000"
	I0729 04:53:08.155912    8863 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:53:08.155928    8863 fix.go:54] fixHost starting: 
	I0729 04:53:08.156363    8863 cli_runner.go:164] Run: docker container inspect offline-docker-789000 --format={{.State.Status}}
	W0729 04:53:08.176246    8863 cli_runner.go:211] docker container inspect offline-docker-789000 --format={{.State.Status}} returned with exit code 1
	I0729 04:53:08.176293    8863 fix.go:112] recreateIfNeeded on offline-docker-789000: state= err=unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:08.176311    8863 fix.go:117] machineExists: false. err=machine does not exist
	I0729 04:53:08.198304    8863 out.go:177] * docker "offline-docker-789000" container is missing, will recreate.
	I0729 04:53:08.239610    8863 delete.go:124] DEMOLISHING offline-docker-789000 ...
	I0729 04:53:08.239784    8863 cli_runner.go:164] Run: docker container inspect offline-docker-789000 --format={{.State.Status}}
	W0729 04:53:08.258577    8863 cli_runner.go:211] docker container inspect offline-docker-789000 --format={{.State.Status}} returned with exit code 1
	W0729 04:53:08.258626    8863 stop.go:83] unable to get state: unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:08.258643    8863 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:08.259019    8863 cli_runner.go:164] Run: docker container inspect offline-docker-789000 --format={{.State.Status}}
	W0729 04:53:08.275969    8863 cli_runner.go:211] docker container inspect offline-docker-789000 --format={{.State.Status}} returned with exit code 1
	I0729 04:53:08.276023    8863 delete.go:82] Unable to get host status for offline-docker-789000, assuming it has already been deleted: state: unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:08.276098    8863 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-789000
	W0729 04:53:08.293098    8863 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-789000 returned with exit code 1
	I0729 04:53:08.293132    8863 kic.go:371] could not find the container offline-docker-789000 to remove it. will try anyways
	I0729 04:53:08.293209    8863 cli_runner.go:164] Run: docker container inspect offline-docker-789000 --format={{.State.Status}}
	W0729 04:53:08.310198    8863 cli_runner.go:211] docker container inspect offline-docker-789000 --format={{.State.Status}} returned with exit code 1
	W0729 04:53:08.310246    8863 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:08.310324    8863 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-789000 /bin/bash -c "sudo init 0"
	W0729 04:53:08.327306    8863 cli_runner.go:211] docker exec --privileged -t offline-docker-789000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0729 04:53:08.327342    8863 oci.go:650] error shutdown offline-docker-789000: docker exec --privileged -t offline-docker-789000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:09.327837    8863 cli_runner.go:164] Run: docker container inspect offline-docker-789000 --format={{.State.Status}}
	W0729 04:53:09.347625    8863 cli_runner.go:211] docker container inspect offline-docker-789000 --format={{.State.Status}} returned with exit code 1
	I0729 04:53:09.347674    8863 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:09.347688    8863 oci.go:664] temporary error: container offline-docker-789000 status is  but expect it to be exited
	I0729 04:53:09.347714    8863 retry.go:31] will retry after 381.651738ms: couldn't verify container is exited. %v: unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:09.731664    8863 cli_runner.go:164] Run: docker container inspect offline-docker-789000 --format={{.State.Status}}
	W0729 04:53:09.751463    8863 cli_runner.go:211] docker container inspect offline-docker-789000 --format={{.State.Status}} returned with exit code 1
	I0729 04:53:09.751515    8863 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:09.751523    8863 oci.go:664] temporary error: container offline-docker-789000 status is  but expect it to be exited
	I0729 04:53:09.751554    8863 retry.go:31] will retry after 766.020389ms: couldn't verify container is exited. %v: unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:10.517927    8863 cli_runner.go:164] Run: docker container inspect offline-docker-789000 --format={{.State.Status}}
	W0729 04:53:10.536416    8863 cli_runner.go:211] docker container inspect offline-docker-789000 --format={{.State.Status}} returned with exit code 1
	I0729 04:53:10.536466    8863 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:10.536480    8863 oci.go:664] temporary error: container offline-docker-789000 status is  but expect it to be exited
	I0729 04:53:10.536507    8863 retry.go:31] will retry after 1.391401146s: couldn't verify container is exited. %v: unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:11.928328    8863 cli_runner.go:164] Run: docker container inspect offline-docker-789000 --format={{.State.Status}}
	W0729 04:53:11.948129    8863 cli_runner.go:211] docker container inspect offline-docker-789000 --format={{.State.Status}} returned with exit code 1
	I0729 04:53:11.948175    8863 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:11.948184    8863 oci.go:664] temporary error: container offline-docker-789000 status is  but expect it to be exited
	I0729 04:53:11.948210    8863 retry.go:31] will retry after 1.622350833s: couldn't verify container is exited. %v: unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:13.573021    8863 cli_runner.go:164] Run: docker container inspect offline-docker-789000 --format={{.State.Status}}
	W0729 04:53:13.592686    8863 cli_runner.go:211] docker container inspect offline-docker-789000 --format={{.State.Status}} returned with exit code 1
	I0729 04:53:13.592731    8863 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:13.592741    8863 oci.go:664] temporary error: container offline-docker-789000 status is  but expect it to be exited
	I0729 04:53:13.592766    8863 retry.go:31] will retry after 2.798139928s: couldn't verify container is exited. %v: unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:16.393269    8863 cli_runner.go:164] Run: docker container inspect offline-docker-789000 --format={{.State.Status}}
	W0729 04:53:16.413888    8863 cli_runner.go:211] docker container inspect offline-docker-789000 --format={{.State.Status}} returned with exit code 1
	I0729 04:53:16.413930    8863 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:16.413942    8863 oci.go:664] temporary error: container offline-docker-789000 status is  but expect it to be exited
	I0729 04:53:16.413968    8863 retry.go:31] will retry after 3.53081517s: couldn't verify container is exited. %v: unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:19.946261    8863 cli_runner.go:164] Run: docker container inspect offline-docker-789000 --format={{.State.Status}}
	W0729 04:53:19.965415    8863 cli_runner.go:211] docker container inspect offline-docker-789000 --format={{.State.Status}} returned with exit code 1
	I0729 04:53:19.965460    8863 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:19.965469    8863 oci.go:664] temporary error: container offline-docker-789000 status is  but expect it to be exited
	I0729 04:53:19.965504    8863 retry.go:31] will retry after 4.280480428s: couldn't verify container is exited. %v: unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:24.246837    8863 cli_runner.go:164] Run: docker container inspect offline-docker-789000 --format={{.State.Status}}
	W0729 04:53:24.266902    8863 cli_runner.go:211] docker container inspect offline-docker-789000 --format={{.State.Status}} returned with exit code 1
	I0729 04:53:24.266944    8863 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:53:24.266954    8863 oci.go:664] temporary error: container offline-docker-789000 status is  but expect it to be exited
	I0729 04:53:24.266995    8863 oci.go:88] couldn't shut down offline-docker-789000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	 
	I0729 04:53:24.267090    8863 cli_runner.go:164] Run: docker rm -f -v offline-docker-789000
	I0729 04:53:24.284819    8863 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-789000
	W0729 04:53:24.301957    8863 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-789000 returned with exit code 1
	I0729 04:53:24.302058    8863 cli_runner.go:164] Run: docker network inspect offline-docker-789000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 04:53:24.320266    8863 cli_runner.go:164] Run: docker network rm offline-docker-789000
	I0729 04:53:24.402987    8863 fix.go:124] Sleeping 1 second for extra luck!
	I0729 04:53:25.405143    8863 start.go:125] createHost starting for "" (driver="docker")
	I0729 04:53:25.428387    8863 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0729 04:53:25.428563    8863 start.go:159] libmachine.API.Create for "offline-docker-789000" (driver="docker")
	I0729 04:53:25.428597    8863 client.go:168] LocalClient.Create starting
	I0729 04:53:25.428814    8863 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/ca.pem
	I0729 04:53:25.428921    8863 main.go:141] libmachine: Decoding PEM data...
	I0729 04:53:25.428959    8863 main.go:141] libmachine: Parsing certificate...
	I0729 04:53:25.429041    8863 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/cert.pem
	I0729 04:53:25.429123    8863 main.go:141] libmachine: Decoding PEM data...
	I0729 04:53:25.429138    8863 main.go:141] libmachine: Parsing certificate...
	I0729 04:53:25.430335    8863 cli_runner.go:164] Run: docker network inspect offline-docker-789000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 04:53:25.448577    8863 cli_runner.go:211] docker network inspect offline-docker-789000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 04:53:25.448684    8863 network_create.go:284] running [docker network inspect offline-docker-789000] to gather additional debugging logs...
	I0729 04:53:25.448700    8863 cli_runner.go:164] Run: docker network inspect offline-docker-789000
	W0729 04:53:25.466073    8863 cli_runner.go:211] docker network inspect offline-docker-789000 returned with exit code 1
	I0729 04:53:25.466101    8863 network_create.go:287] error running [docker network inspect offline-docker-789000]: docker network inspect offline-docker-789000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-789000 not found
	I0729 04:53:25.466111    8863 network_create.go:289] output of [docker network inspect offline-docker-789000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-789000 not found
	
	** /stderr **
	I0729 04:53:25.466257    8863 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 04:53:25.485391    8863 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:53:25.486952    8863 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:53:25.488266    8863 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:53:25.489860    8863 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:53:25.491433    8863 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:53:25.491753    8863 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00156d440}
	I0729 04:53:25.491766    8863 network_create.go:124] attempt to create docker network offline-docker-789000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0729 04:53:25.491834    8863 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-789000 offline-docker-789000
	I0729 04:53:25.555431    8863 network_create.go:108] docker network offline-docker-789000 192.168.94.0/24 created
	I0729 04:53:25.555470    8863 kic.go:121] calculated static IP "192.168.94.2" for the "offline-docker-789000" container
	I0729 04:53:25.555571    8863 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 04:53:25.574744    8863 cli_runner.go:164] Run: docker volume create offline-docker-789000 --label name.minikube.sigs.k8s.io=offline-docker-789000 --label created_by.minikube.sigs.k8s.io=true
	I0729 04:53:25.591704    8863 oci.go:103] Successfully created a docker volume offline-docker-789000
	I0729 04:53:25.591844    8863 cli_runner.go:164] Run: docker run --rm --name offline-docker-789000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-789000 --entrypoint /usr/bin/test -v offline-docker-789000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 04:53:25.855816    8863 oci.go:107] Successfully prepared a docker volume offline-docker-789000
	I0729 04:53:25.855853    8863 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:53:25.855866    8863 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 04:53:25.855968    8863 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-789000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0729 04:59:25.428634    8863 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 04:59:25.428757    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:59:25.449870    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	I0729 04:59:25.449980    8863 retry.go:31] will retry after 258.913477ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:59:25.709867    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:59:25.730211    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	I0729 04:59:25.730312    8863 retry.go:31] will retry after 343.537751ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:59:26.075386    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:59:26.095206    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	I0729 04:59:26.095328    8863 retry.go:31] will retry after 578.254526ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:59:26.675342    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:59:26.694905    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	I0729 04:59:26.695020    8863 retry.go:31] will retry after 538.701411ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:59:27.236094    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:59:27.255659    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	W0729 04:59:27.255766    8863 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	
	W0729 04:59:27.255791    8863 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:59:27.255848    8863 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 04:59:27.255901    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:59:27.273857    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	I0729 04:59:27.273953    8863 retry.go:31] will retry after 155.821502ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:59:27.430397    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:59:27.449212    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	I0729 04:59:27.449322    8863 retry.go:31] will retry after 532.507226ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:59:27.984232    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:59:28.005186    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	I0729 04:59:28.005283    8863 retry.go:31] will retry after 722.680165ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:59:28.730361    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:59:28.751225    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	W0729 04:59:28.751330    8863 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	
	W0729 04:59:28.751347    8863 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:59:28.751360    8863 start.go:128] duration metric: took 6m3.34705948s to createHost
	I0729 04:59:28.751437    8863 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 04:59:28.751496    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:59:28.768675    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	I0729 04:59:28.768766    8863 retry.go:31] will retry after 161.609274ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:59:28.932745    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:59:28.951062    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	I0729 04:59:28.951147    8863 retry.go:31] will retry after 313.495914ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:59:29.265056    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:59:29.284797    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	I0729 04:59:29.284888    8863 retry.go:31] will retry after 501.328109ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:59:29.787073    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:59:29.807892    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	I0729 04:59:29.807983    8863 retry.go:31] will retry after 872.344044ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:59:30.681203    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:59:30.762184    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	W0729 04:59:30.762291    8863 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	
	W0729 04:59:30.762308    8863 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:59:30.762375    8863 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 04:59:30.762441    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:59:30.779259    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	I0729 04:59:30.779354    8863 retry.go:31] will retry after 268.875105ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:59:31.049094    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:59:31.068904    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	I0729 04:59:31.069000    8863 retry.go:31] will retry after 236.860842ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:59:31.306728    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:59:31.326735    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	I0729 04:59:31.326838    8863 retry.go:31] will retry after 778.33525ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:59:32.107581    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:59:32.126596    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	I0729 04:59:32.126689    8863 retry.go:31] will retry after 612.745867ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:59:32.739853    8863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000
	W0729 04:59:32.759465    8863 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000 returned with exit code 1
	W0729 04:59:32.759564    8863 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	
	W0729 04:59:32.759580    8863 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-789000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-789000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000
	I0729 04:59:32.759591    8863 fix.go:56] duration metric: took 6m24.604607458s for fixHost
	I0729 04:59:32.759598    8863 start.go:83] releasing machines lock for "offline-docker-789000", held for 6m24.604648747s
	W0729 04:59:32.759669    8863 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-789000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-789000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0729 04:59:32.780530    8863 out.go:177] 
	W0729 04:59:32.802208    8863 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0729 04:59:32.802279    8863 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0729 04:59:32.802320    8863 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0729 04:59:32.824223    8863 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-789000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:626: *** TestOffline FAILED at 2024-07-29 04:59:32.919595 -0700 PDT m=+6003.253613222
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-789000
helpers_test.go:235: (dbg) docker inspect offline-docker-789000:

-- stdout --
	[
	    {
	        "Name": "offline-docker-789000",
	        "Id": "c8c130481f164f04874bd17478b600e512dcab8d947d69d5ef2601b59f060adf",
	        "Created": "2024-07-29T11:53:25.506521039Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-789000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-789000 -n offline-docker-789000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-789000 -n offline-docker-789000: exit status 7 (72.778903ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 04:59:33.012107    9255 status.go:249] status error: host: state: unknown state "offline-docker-789000": docker container inspect offline-docker-789000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-789000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-789000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-789000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-789000
--- FAIL: TestOffline (754.54s)

TestCertOptions (7201.73s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-621000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E0729 05:13:53.898061    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 05:13:57.506813    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 05:14:14.448926    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 05:18:53.895524    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 05:19:14.445135    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestCertExpiration (7m19s)
	TestCertOptions (6m44s)
	TestNetworkPlugins (32m31s)

goroutine 2555 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 19 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000134b60, 0xc000971bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0009264f8, {0x78b0ae0, 0x2a, 0x2a}, {0x3382825?, 0x4ebbf89?, 0x78d3aa0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000844640)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000844640)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 9 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0006cad00)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 653 [syscall, 6 minutes]:
syscall.syscall6(0xc001edff80?, 0x1000000000010?, 0x10000000019?, 0x4ef379a8?, 0x90?, 0x81f5108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0013318a0?, 0x32c30c5?, 0x90?, 0x648be80?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x33f39e5?, 0xc0013318d4, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc001ee8360)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000207500)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc000207500)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0001341a0, 0xc000207500)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertOptions(0xc0001341a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x445
testing.tRunner(0xc0001341a0, 0x651f838)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2552 [IO wait]:
internal/poll.runtime_pollWait(0x4f169908, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001c72b40?, 0xc001a84b09?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001c72b40, {0xc001a84b09, 0x4f7, 0x4f7})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001442118, {0xc001a84b09?, 0xc000584e00?, 0x223?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001ede990, {0x652a538, 0xc0014960e0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x652a678, 0xc001ede990}, {0x652a538, 0xc0014960e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0015d7678?, {0x652a678, 0xc001ede990})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x7872300?, {0x652a678?, 0xc001ede990?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x652a678, 0xc001ede990}, {0x652a5f8, 0xc001442118}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001a58360?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 653
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 179 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000980e40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 178
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2235 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0006adef0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013e49c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013e49c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013e49c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013e49c0, 0xc0008bc080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2234
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 13 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 12
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

goroutine 1125 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc001ad4780, 0xc001a8efc0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1124
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 654 [syscall, 7 minutes]:
syscall.syscall6(0xc001b0bf80?, 0x1000000000010?, 0x10100000019?, 0x4f3184f8?, 0x90?, 0x81f55b8?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc00174fa40?, 0x32c30c5?, 0x90?, 0x648be80?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x33f39e5?, 0xc00174fa74, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc00201c060)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000002000)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc000002000)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc000135040, 0xc000002000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertExpiration(0xc000135040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:123 +0x2c5
testing.tRunner(0xc000135040, 0x651f830)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2522 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x4f168c70, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001c72060?, 0xc001323315?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001c72060, {0xc001323315, 0x4eb, 0x4eb})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0014420c0, {0xc001323315?, 0x9?, 0x22c?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001b0a120, {0x652a538, 0xc001496108})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x652a678, 0xc001b0a120}, {0x652a538, 0xc001496108}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x652a678, 0xc001b0a120})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x7872300?, {0x652a678?, 0xc001b0a120?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x652a678, 0xc001b0a120}, {0x652a5f8, 0xc0014420c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001667c80?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 654
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 180 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000b0f080, 0xc00014c000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 178
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 1318 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc001b4ea80, 0xc002065260)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1317
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 970 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x654f820, 0xc00014c000}, 0xc00176c750, 0xc0015b6f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x654f820, 0xc00014c000}, 0xa0?, 0xc00176c750, 0xc00176c798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x654f820?, 0xc00014c000?}, 0x3848016?, 0xc001c6ad80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00176c7d0?, 0x343c9a4?, 0xc000067da0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 986
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 1858 [syscall, 97 minutes]:
syscall.syscall(0x0?, 0xc000af8c48?, 0xc0015dd6f0?, 0x3362dbd?)
	/usr/local/go/src/runtime/sys_darwin.go:23 +0x70
syscall.Flock(0xc001dd2cc0?, 0x604?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:682 +0x29
github.com/juju/mutex/v2.acquireFlock.func3()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:114 +0x34
github.com/juju/mutex/v2.acquireFlock.func4()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:121 +0x58
github.com/juju/mutex/v2.acquireFlock.func5()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:151 +0x22
created by github.com/juju/mutex/v2.acquireFlock in goroutine 1850
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:150 +0x4b1

goroutine 186 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000b0f050, 0x2d)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x6014700?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000980d20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000b0f080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0000698f0, {0x652bb20, 0xc000167a70}, 0x1, 0xc00014c000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0000698f0, 0x3b9aca00, 0x0, 0x1, 0xc00014c000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 180
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 187 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x654f820, 0xc00014c000}, 0xc000112750, 0xc0015ccf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x654f820, 0xc00014c000}, 0x0?, 0xc000112750, 0xc000112798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x654f820?, 0xc00014c000?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0001127d0?, 0x343c9a4?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 180
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 188 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 187
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2239 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0006adef0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013e5040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013e5040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013e5040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013e5040, 0xc0008bc400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2234
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1280 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc001ad4480, 0xc000066de0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 831
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 1368 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0xc001640b40)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1389
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

goroutine 2164 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0006adef0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020f89c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020f89c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc0020f89c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc0020f89c0, 0x651f930)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1369 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0xc001640b40)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1389
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

goroutine 2249 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0006adef0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020f9040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020f9040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc0020f9040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:292 +0xb4
testing.tRunner(0xc0020f9040, 0x651f8f8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2163 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0006adef0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020f8820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020f8820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc0020f8820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc0020f8820, 0x651f920)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2236 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0006adef0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013e4b60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013e4b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013e4b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013e4b60, 0xc0008bc180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2234
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2240 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0006adef0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013e51e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013e51e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013e51e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013e51e0, 0xc0008bc480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2234
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 726 [IO wait, 113 minutes]:
internal/poll.runtime_pollWait(0x4f169620, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000972800?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000972800)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc000972800)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0020c8be0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0020c8be0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc00087e0f0, {0x6542710, 0xc0020c8be0})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc00087e0f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0020f8340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 723
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 2162 [chan receive, 32 minutes]:
testing.(*T).Run(0xc0020f81a0, {0x4e624ca?, 0x4e8c18ffe88?}, 0xc0014e0048)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0020f81a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0020f81a0, 0x651f918)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2259 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0006adef0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013e56c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013e56c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013e56c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013e56c0, 0xc0008bc600)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2234
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 986 [chan receive, 111 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0006f4780, 0xc00014c000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 860
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2247 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0006adef0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020f8d00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020f8d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc0020f8d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:143 +0x86
testing.tRunner(0xc0020f8d00, 0x651f968)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 985 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001b52540)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 860
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2238 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0006adef0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013e4ea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013e4ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013e4ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013e4ea0, 0xc0008bc380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2234
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2258 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0006adef0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013e5520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013e5520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013e5520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013e5520, 0xc0008bc580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2234
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2257 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0006adef0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013e5380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013e5380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013e5380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013e5380, 0xc0008bc500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2234
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 971 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 970
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 969 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0006f4650, 0x2c)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x6014700?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001b52360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0006f4780)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0012fcc50, {0x652bb20, 0xc0013c06f0}, 0x1, 0xc00014c000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0012fcc50, 0x3b9aca00, 0x0, 0x1, 0xc00014c000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 986
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2237 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0006adef0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013e4d00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013e4d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013e4d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013e4d00, 0xc0008bc300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2234
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1269 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0xc00148bb00, 0xc00014d6e0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1268
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2553 [IO wait]:
internal/poll.runtime_pollWait(0x4f169338, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001c72c00?, 0xc0015d4c63?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001c72c00, {0xc0015d4c63, 0x39d, 0x39d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001442150, {0xc0015d4c63?, 0xc001ae2700?, 0x63?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001ede9c0, {0x652a538, 0xc0014960e8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x652a678, 0xc001ede9c0}, {0x652a538, 0xc0014960e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0015da678?, {0x652a678, 0xc001ede9c0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x7872300?, {0x652a678?, 0xc001ede9c0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x652a678, 0xc001ede9c0}, {0x652a5f8, 0xc001442150}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001590240?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 653
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 2246 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0006adef0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020f8b60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020f8b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc0020f8b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:85 +0x89
testing.tRunner(0xc0020f8b60, 0x651f940)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2234 [chan receive, 32 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0013e4000, 0xc0014e0048)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2162
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2248 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0006adef0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020f8ea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020f8ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc0020f8ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:215 +0x39
testing.tRunner(0xc0020f8ea0, 0x651f8e0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2523 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x4f169810, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001c72120?, 0xc0015d4463?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001c72120, {0xc0015d4463, 0x39d, 0x39d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0014420e8, {0xc0015d4463?, 0x33f650d?, 0x63?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001b0a150, {0x652a538, 0xc001496110})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x652a678, 0xc001b0a150}, {0x652a538, 0xc001496110}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x77e4980?, {0x652a678, 0xc001b0a150})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x7872300?, {0x652a678?, 0xc001b0a150?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x652a678, 0xc001b0a150}, {0x652a5f8, 0xc0014420e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001dcafa0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 654
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 2221 [chan receive, 32 minutes]:
testing.(*testContext).waitParallel(0xc0006adef0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0020f84e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0020f84e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc0020f84e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc0020f84e0, 0x651f960)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2524 [select, 7 minutes]:
os/exec.(*Cmd).watchCtx(0xc000002000, 0xc001a58240)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 654
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2554 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc000207500, 0xc001590ae0)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 653
	/usr/local/go/src/os/exec/exec.go:754 +0x976

TestDockerFlags (755.49s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-043000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0729 05:03:53.848471    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 05:04:14.397484    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 05:08:36.912621    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 05:08:53.848805    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 05:09:14.396614    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-043000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 52 (12m34.622054975s)

-- stdout --
	* [docker-flags-043000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "docker-flags-043000" primary control-plane node in "docker-flags-043000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-043000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0729 05:00:10.149829    9610 out.go:291] Setting OutFile to fd 1 ...
	I0729 05:00:10.150131    9610 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:00:10.150136    9610 out.go:304] Setting ErrFile to fd 2...
	I0729 05:00:10.150140    9610 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 05:00:10.150307    9610 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 05:00:10.151815    9610 out.go:298] Setting JSON to false
	I0729 05:00:10.174279    9610 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":7180,"bootTime":1722247230,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 05:00:10.174368    9610 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 05:00:10.196518    9610 out.go:177] * [docker-flags-043000] minikube v1.33.1 on Darwin 14.5
	I0729 05:00:10.239577    9610 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 05:00:10.239629    9610 notify.go:220] Checking for updates...
	I0729 05:00:10.282268    9610 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	I0729 05:00:10.303428    9610 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 05:00:10.324234    9610 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 05:00:10.345320    9610 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	I0729 05:00:10.366536    9610 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 05:00:10.389134    9610 config.go:182] Loaded profile config "force-systemd-flag-490000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 05:00:10.389306    9610 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 05:00:10.413708    9610 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 05:00:10.413887    9610 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 05:00:10.493645    9610 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:114 OomKillDisable:false NGoroutines:230 SystemTime:2024-07-29 12:00:10.484905792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddres
s:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.
13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docke
r-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-
plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 05:00:10.536239    9610 out.go:177] * Using the docker driver based on user configuration
	I0729 05:00:10.557193    9610 start.go:297] selected driver: docker
	I0729 05:00:10.557222    9610 start.go:901] validating driver "docker" against <nil>
	I0729 05:00:10.557237    9610 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 05:00:10.561552    9610 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 05:00:10.637075    9610 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:114 OomKillDisable:false NGoroutines:230 SystemTime:2024-07-29 12:00:10.628137753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddres
s:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.
13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docke
r-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-
plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 05:00:10.637235    9610 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 05:00:10.637431    9610 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0729 05:00:10.659097    9610 out.go:177] * Using Docker Desktop driver with root privileges
	I0729 05:00:10.681013    9610 cni.go:84] Creating CNI manager for ""
	I0729 05:00:10.681054    9610 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 05:00:10.681067    9610 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 05:00:10.681175    9610 start.go:340] cluster config:
	{Name:docker-flags-043000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-043000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPat
h: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 05:00:10.702867    9610 out.go:177] * Starting "docker-flags-043000" primary control-plane node in "docker-flags-043000" cluster
	I0729 05:00:10.744843    9610 cache.go:121] Beginning downloading kic base image for docker with docker
	I0729 05:00:10.765833    9610 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0729 05:00:10.808012    9610 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 05:00:10.808075    9610 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 05:00:10.808092    9610 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0729 05:00:10.808115    9610 cache.go:56] Caching tarball of preloaded images
	I0729 05:00:10.808337    9610 preload.go:172] Found /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0729 05:00:10.808361    9610 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 05:00:10.809065    9610 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/docker-flags-043000/config.json ...
	I0729 05:00:10.809346    9610 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/docker-flags-043000/config.json: {Name:mkd236f5c8cc32efb4a1d3d9aec149e9e225bc60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0729 05:00:10.833982    9610 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0729 05:00:10.833994    9610 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 05:00:10.834111    9610 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 05:00:10.834130    9610 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 05:00:10.834136    9610 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 05:00:10.834144    9610 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 05:00:10.834149    9610 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0729 05:00:10.837189    9610 image.go:273] response: 
	I0729 05:00:10.964286    9610 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0729 05:00:10.964353    9610 cache.go:194] Successfully downloaded all kic artifacts
	I0729 05:00:10.964405    9610 start.go:360] acquireMachinesLock for docker-flags-043000: {Name:mkd1fa98391682e969c9c3a610b62de7993a3697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:00:10.964591    9610 start.go:364] duration metric: took 156.136µs to acquireMachinesLock for "docker-flags-043000"
	I0729 05:00:10.964621    9610 start.go:93] Provisioning new machine with config: &{Name:docker-flags-043000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-043000 Namespace:
default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 05:00:10.964684    9610 start.go:125] createHost starting for "" (driver="docker")
	I0729 05:00:11.007114    9610 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0729 05:00:11.007328    9610 start.go:159] libmachine.API.Create for "docker-flags-043000" (driver="docker")
	I0729 05:00:11.007354    9610 client.go:168] LocalClient.Create starting
	I0729 05:00:11.007447    9610 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/ca.pem
	I0729 05:00:11.007498    9610 main.go:141] libmachine: Decoding PEM data...
	I0729 05:00:11.007512    9610 main.go:141] libmachine: Parsing certificate...
	I0729 05:00:11.007566    9610 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/cert.pem
	I0729 05:00:11.007605    9610 main.go:141] libmachine: Decoding PEM data...
	I0729 05:00:11.007612    9610 main.go:141] libmachine: Parsing certificate...
	I0729 05:00:11.008135    9610 cli_runner.go:164] Run: docker network inspect docker-flags-043000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 05:00:11.025669    9610 cli_runner.go:211] docker network inspect docker-flags-043000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 05:00:11.025796    9610 network_create.go:284] running [docker network inspect docker-flags-043000] to gather additional debugging logs...
	I0729 05:00:11.025815    9610 cli_runner.go:164] Run: docker network inspect docker-flags-043000
	W0729 05:00:11.042937    9610 cli_runner.go:211] docker network inspect docker-flags-043000 returned with exit code 1
	I0729 05:00:11.042966    9610 network_create.go:287] error running [docker network inspect docker-flags-043000]: docker network inspect docker-flags-043000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-043000 not found
	I0729 05:00:11.042987    9610 network_create.go:289] output of [docker network inspect docker-flags-043000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-043000 not found
	
	** /stderr **
	I0729 05:00:11.043103    9610 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 05:00:11.062223    9610 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 05:00:11.063814    9610 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 05:00:11.065192    9610 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 05:00:11.065545    9610 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014f8ac0}
	I0729 05:00:11.065575    9610 network_create.go:124] attempt to create docker network docker-flags-043000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0729 05:00:11.065650    9610 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-043000 docker-flags-043000
	W0729 05:00:11.083291    9610 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-043000 docker-flags-043000 returned with exit code 1
	W0729 05:00:11.083341    9610 network_create.go:149] failed to create docker network docker-flags-043000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-043000 docker-flags-043000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0729 05:00:11.083367    9610 network_create.go:116] failed to create docker network docker-flags-043000 192.168.76.0/24, will retry: subnet is taken
	I0729 05:00:11.084808    9610 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 05:00:11.085169    9610 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014f9a40}
	I0729 05:00:11.085182    9610 network_create.go:124] attempt to create docker network docker-flags-043000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0729 05:00:11.085246    9610 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-043000 docker-flags-043000
	I0729 05:00:11.148524    9610 network_create.go:108] docker network docker-flags-043000 192.168.85.0/24 created
	I0729 05:00:11.148563    9610 kic.go:121] calculated static IP "192.168.85.2" for the "docker-flags-043000" container
	I0729 05:00:11.148672    9610 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 05:00:11.167907    9610 cli_runner.go:164] Run: docker volume create docker-flags-043000 --label name.minikube.sigs.k8s.io=docker-flags-043000 --label created_by.minikube.sigs.k8s.io=true
	I0729 05:00:11.186452    9610 oci.go:103] Successfully created a docker volume docker-flags-043000
	I0729 05:00:11.186599    9610 cli_runner.go:164] Run: docker run --rm --name docker-flags-043000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-043000 --entrypoint /usr/bin/test -v docker-flags-043000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 05:00:11.616377    9610 oci.go:107] Successfully prepared a docker volume docker-flags-043000
	I0729 05:00:11.616492    9610 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 05:00:11.616544    9610 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 05:00:11.616650    9610 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-043000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0729 05:06:11.006756    9610 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 05:06:11.006899    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:06:11.026100    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	I0729 05:06:11.026225    9610 retry.go:31] will retry after 153.898219ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:11.182076    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:06:11.202219    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	I0729 05:06:11.202326    9610 retry.go:31] will retry after 344.833265ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:11.549239    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:06:11.568867    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	I0729 05:06:11.568958    9610 retry.go:31] will retry after 787.878871ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:12.357289    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:06:12.377035    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	W0729 05:06:12.377140    9610 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	
	W0729 05:06:12.377162    9610 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:12.377220    9610 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 05:06:12.377276    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:06:12.395066    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	I0729 05:06:12.395154    9610 retry.go:31] will retry after 230.416874ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:12.627997    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:06:12.647961    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	I0729 05:06:12.648062    9610 retry.go:31] will retry after 539.926532ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:13.189152    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:06:13.209097    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	I0729 05:06:13.209193    9610 retry.go:31] will retry after 612.016089ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:13.822488    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:06:13.842599    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	W0729 05:06:13.842698    9610 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	
	W0729 05:06:13.842719    9610 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:13.842727    9610 start.go:128] duration metric: took 6m2.87892038s to createHost
	I0729 05:06:13.842734    9610 start.go:83] releasing machines lock for "docker-flags-043000", held for 6m2.879024336s
	W0729 05:06:13.842750    9610 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0729 05:06:13.843198    9610 cli_runner.go:164] Run: docker container inspect docker-flags-043000 --format={{.State.Status}}
	W0729 05:06:13.860161    9610 cli_runner.go:211] docker container inspect docker-flags-043000 --format={{.State.Status}} returned with exit code 1
	I0729 05:06:13.860215    9610 delete.go:82] Unable to get host status for docker-flags-043000, assuming it has already been deleted: state: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	W0729 05:06:13.860308    9610 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0729 05:06:13.860319    9610 start.go:729] Will try again in 5 seconds ...
	I0729 05:06:18.860690    9610 start.go:360] acquireMachinesLock for docker-flags-043000: {Name:mkd1fa98391682e969c9c3a610b62de7993a3697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:06:18.861546    9610 start.go:364] duration metric: took 801.371µs to acquireMachinesLock for "docker-flags-043000"
	I0729 05:06:18.861730    9610 start.go:96] Skipping create...Using existing machine configuration
	I0729 05:06:18.861747    9610 fix.go:54] fixHost starting: 
	I0729 05:06:18.862221    9610 cli_runner.go:164] Run: docker container inspect docker-flags-043000 --format={{.State.Status}}
	W0729 05:06:18.881510    9610 cli_runner.go:211] docker container inspect docker-flags-043000 --format={{.State.Status}} returned with exit code 1
	I0729 05:06:18.881563    9610 fix.go:112] recreateIfNeeded on docker-flags-043000: state= err=unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:18.881588    9610 fix.go:117] machineExists: false. err=machine does not exist
	I0729 05:06:18.902718    9610 out.go:177] * docker "docker-flags-043000" container is missing, will recreate.
	I0729 05:06:18.944681    9610 delete.go:124] DEMOLISHING docker-flags-043000 ...
	I0729 05:06:18.944858    9610 cli_runner.go:164] Run: docker container inspect docker-flags-043000 --format={{.State.Status}}
	W0729 05:06:18.963870    9610 cli_runner.go:211] docker container inspect docker-flags-043000 --format={{.State.Status}} returned with exit code 1
	W0729 05:06:18.963935    9610 stop.go:83] unable to get state: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:18.963951    9610 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:18.964333    9610 cli_runner.go:164] Run: docker container inspect docker-flags-043000 --format={{.State.Status}}
	W0729 05:06:18.981856    9610 cli_runner.go:211] docker container inspect docker-flags-043000 --format={{.State.Status}} returned with exit code 1
	I0729 05:06:18.981919    9610 delete.go:82] Unable to get host status for docker-flags-043000, assuming it has already been deleted: state: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:18.982006    9610 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-043000
	W0729 05:06:18.999157    9610 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-043000 returned with exit code 1
	I0729 05:06:18.999198    9610 kic.go:371] could not find the container docker-flags-043000 to remove it. will try anyways
	I0729 05:06:18.999273    9610 cli_runner.go:164] Run: docker container inspect docker-flags-043000 --format={{.State.Status}}
	W0729 05:06:19.016386    9610 cli_runner.go:211] docker container inspect docker-flags-043000 --format={{.State.Status}} returned with exit code 1
	W0729 05:06:19.016446    9610 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:19.016536    9610 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-043000 /bin/bash -c "sudo init 0"
	W0729 05:06:19.033731    9610 cli_runner.go:211] docker exec --privileged -t docker-flags-043000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0729 05:06:19.033773    9610 oci.go:650] error shutdown docker-flags-043000: docker exec --privileged -t docker-flags-043000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:20.035184    9610 cli_runner.go:164] Run: docker container inspect docker-flags-043000 --format={{.State.Status}}
	W0729 05:06:20.054900    9610 cli_runner.go:211] docker container inspect docker-flags-043000 --format={{.State.Status}} returned with exit code 1
	I0729 05:06:20.054956    9610 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:20.054970    9610 oci.go:664] temporary error: container docker-flags-043000 status is  but expect it to be exited
	I0729 05:06:20.054994    9610 retry.go:31] will retry after 406.271538ms: couldn't verify container is exited. %v: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:20.462809    9610 cli_runner.go:164] Run: docker container inspect docker-flags-043000 --format={{.State.Status}}
	W0729 05:06:20.481875    9610 cli_runner.go:211] docker container inspect docker-flags-043000 --format={{.State.Status}} returned with exit code 1
	I0729 05:06:20.481932    9610 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:20.481950    9610 oci.go:664] temporary error: container docker-flags-043000 status is  but expect it to be exited
	I0729 05:06:20.481974    9610 retry.go:31] will retry after 1.114957176s: couldn't verify container is exited. %v: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:21.597229    9610 cli_runner.go:164] Run: docker container inspect docker-flags-043000 --format={{.State.Status}}
	W0729 05:06:21.616181    9610 cli_runner.go:211] docker container inspect docker-flags-043000 --format={{.State.Status}} returned with exit code 1
	I0729 05:06:21.616236    9610 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:21.616248    9610 oci.go:664] temporary error: container docker-flags-043000 status is  but expect it to be exited
	I0729 05:06:21.616272    9610 retry.go:31] will retry after 1.530826519s: couldn't verify container is exited. %v: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:23.148793    9610 cli_runner.go:164] Run: docker container inspect docker-flags-043000 --format={{.State.Status}}
	W0729 05:06:23.167973    9610 cli_runner.go:211] docker container inspect docker-flags-043000 --format={{.State.Status}} returned with exit code 1
	I0729 05:06:23.168020    9610 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:23.168029    9610 oci.go:664] temporary error: container docker-flags-043000 status is  but expect it to be exited
	I0729 05:06:23.168054    9610 retry.go:31] will retry after 1.18070099s: couldn't verify container is exited. %v: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:24.350068    9610 cli_runner.go:164] Run: docker container inspect docker-flags-043000 --format={{.State.Status}}
	W0729 05:06:24.369973    9610 cli_runner.go:211] docker container inspect docker-flags-043000 --format={{.State.Status}} returned with exit code 1
	I0729 05:06:24.370022    9610 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:24.370035    9610 oci.go:664] temporary error: container docker-flags-043000 status is  but expect it to be exited
	I0729 05:06:24.370058    9610 retry.go:31] will retry after 2.954827286s: couldn't verify container is exited. %v: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:27.325151    9610 cli_runner.go:164] Run: docker container inspect docker-flags-043000 --format={{.State.Status}}
	W0729 05:06:27.343678    9610 cli_runner.go:211] docker container inspect docker-flags-043000 --format={{.State.Status}} returned with exit code 1
	I0729 05:06:27.343727    9610 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:27.343737    9610 oci.go:664] temporary error: container docker-flags-043000 status is  but expect it to be exited
	I0729 05:06:27.343765    9610 retry.go:31] will retry after 4.780269765s: couldn't verify container is exited. %v: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:32.124274    9610 cli_runner.go:164] Run: docker container inspect docker-flags-043000 --format={{.State.Status}}
	W0729 05:06:32.143503    9610 cli_runner.go:211] docker container inspect docker-flags-043000 --format={{.State.Status}} returned with exit code 1
	I0729 05:06:32.143560    9610 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:32.143569    9610 oci.go:664] temporary error: container docker-flags-043000 status is  but expect it to be exited
	I0729 05:06:32.143593    9610 retry.go:31] will retry after 5.241463249s: couldn't verify container is exited. %v: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:37.387424    9610 cli_runner.go:164] Run: docker container inspect docker-flags-043000 --format={{.State.Status}}
	W0729 05:06:37.406924    9610 cli_runner.go:211] docker container inspect docker-flags-043000 --format={{.State.Status}} returned with exit code 1
	I0729 05:06:37.406970    9610 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:06:37.406981    9610 oci.go:664] temporary error: container docker-flags-043000 status is  but expect it to be exited
	I0729 05:06:37.407015    9610 oci.go:88] couldn't shut down docker-flags-043000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	 
	I0729 05:06:37.407090    9610 cli_runner.go:164] Run: docker rm -f -v docker-flags-043000
	I0729 05:06:37.426017    9610 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-043000
	W0729 05:06:37.444056    9610 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-043000 returned with exit code 1
	I0729 05:06:37.444172    9610 cli_runner.go:164] Run: docker network inspect docker-flags-043000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 05:06:37.461642    9610 cli_runner.go:164] Run: docker network rm docker-flags-043000
	I0729 05:06:37.545278    9610 fix.go:124] Sleeping 1 second for extra luck!
	I0729 05:06:38.547484    9610 start.go:125] createHost starting for "" (driver="docker")
	I0729 05:06:38.569483    9610 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0729 05:06:38.569669    9610 start.go:159] libmachine.API.Create for "docker-flags-043000" (driver="docker")
	I0729 05:06:38.569695    9610 client.go:168] LocalClient.Create starting
	I0729 05:06:38.569924    9610 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/ca.pem
	I0729 05:06:38.570032    9610 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:38.570060    9610 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:38.570142    9610 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/cert.pem
	I0729 05:06:38.570225    9610 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:38.570240    9610 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:38.591167    9610 cli_runner.go:164] Run: docker network inspect docker-flags-043000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 05:06:38.610141    9610 cli_runner.go:211] docker network inspect docker-flags-043000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 05:06:38.610237    9610 network_create.go:284] running [docker network inspect docker-flags-043000] to gather additional debugging logs...
	I0729 05:06:38.610260    9610 cli_runner.go:164] Run: docker network inspect docker-flags-043000
	W0729 05:06:38.627199    9610 cli_runner.go:211] docker network inspect docker-flags-043000 returned with exit code 1
	I0729 05:06:38.627232    9610 network_create.go:287] error running [docker network inspect docker-flags-043000]: docker network inspect docker-flags-043000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-043000 not found
	I0729 05:06:38.627247    9610 network_create.go:289] output of [docker network inspect docker-flags-043000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-043000 not found
	
	** /stderr **
	I0729 05:06:38.627372    9610 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 05:06:38.646327    9610 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 05:06:38.647990    9610 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 05:06:38.649814    9610 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 05:06:38.651592    9610 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 05:06:38.653374    9610 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 05:06:38.655240    9610 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 05:06:38.655888    9610 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000099b50}
	I0729 05:06:38.655932    9610 network_create.go:124] attempt to create docker network docker-flags-043000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0729 05:06:38.656052    9610 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-043000 docker-flags-043000
	I0729 05:06:38.720716    9610 network_create.go:108] docker network docker-flags-043000 192.168.103.0/24 created
	I0729 05:06:38.720768    9610 kic.go:121] calculated static IP "192.168.103.2" for the "docker-flags-043000" container
	I0729 05:06:38.720881    9610 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 05:06:38.740585    9610 cli_runner.go:164] Run: docker volume create docker-flags-043000 --label name.minikube.sigs.k8s.io=docker-flags-043000 --label created_by.minikube.sigs.k8s.io=true
	I0729 05:06:38.757653    9610 oci.go:103] Successfully created a docker volume docker-flags-043000
	I0729 05:06:38.757775    9610 cli_runner.go:164] Run: docker run --rm --name docker-flags-043000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-043000 --entrypoint /usr/bin/test -v docker-flags-043000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 05:06:39.022125    9610 oci.go:107] Successfully prepared a docker volume docker-flags-043000
	I0729 05:06:39.022162    9610 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 05:06:39.022179    9610 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 05:06:39.022315    9610 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-043000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0729 05:12:38.622026    9610 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 05:12:38.622155    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:12:38.641901    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	I0729 05:12:38.642016    9610 retry.go:31] will retry after 335.496904ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:12:38.979970    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:12:39.000172    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	I0729 05:12:39.000284    9610 retry.go:31] will retry after 193.404802ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:12:39.196105    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:12:39.215665    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	I0729 05:12:39.215778    9610 retry.go:31] will retry after 771.955352ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:12:39.990110    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:12:40.012648    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	W0729 05:12:40.012750    9610 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	
	W0729 05:12:40.012770    9610 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:12:40.012836    9610 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 05:12:40.012889    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:12:40.030415    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	I0729 05:12:40.030534    9610 retry.go:31] will retry after 240.010418ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:12:40.270925    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:12:40.291551    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	I0729 05:12:40.291649    9610 retry.go:31] will retry after 542.743474ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:12:40.836781    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:12:40.855959    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	I0729 05:12:40.856076    9610 retry.go:31] will retry after 614.96262ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:12:41.472165    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:12:41.493524    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	W0729 05:12:41.493641    9610 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	
	W0729 05:12:41.493668    9610 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:12:41.493688    9610 start.go:128] duration metric: took 6m2.895191844s to createHost
	I0729 05:12:41.493754    9610 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 05:12:41.493805    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:12:41.513500    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	I0729 05:12:41.513645    9610 retry.go:31] will retry after 199.790778ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:12:41.715809    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:12:41.734401    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	I0729 05:12:41.734498    9610 retry.go:31] will retry after 200.832017ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:12:41.936864    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:12:41.956304    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	I0729 05:12:41.956400    9610 retry.go:31] will retry after 812.901526ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:12:42.771050    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:12:42.790851    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	I0729 05:12:42.790961    9610 retry.go:31] will retry after 470.138813ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:12:43.261417    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:12:43.280506    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	W0729 05:12:43.280603    9610 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	
	W0729 05:12:43.280620    9610 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:12:43.280683    9610 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 05:12:43.280756    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:12:43.298357    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	I0729 05:12:43.298450    9610 retry.go:31] will retry after 251.695373ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:12:43.550396    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:12:43.569856    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	I0729 05:12:43.569949    9610 retry.go:31] will retry after 346.619184ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:12:43.916943    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:12:43.936862    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	I0729 05:12:43.936954    9610 retry.go:31] will retry after 656.385803ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:12:44.595770    9610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000
	W0729 05:12:44.615007    9610 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000 returned with exit code 1
	W0729 05:12:44.615108    9610 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	
	W0729 05:12:44.615127    9610 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-043000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-043000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	I0729 05:12:44.615141    9610 fix.go:56] duration metric: took 6m25.702486537s for fixHost
	I0729 05:12:44.615148    9610 start.go:83] releasing machines lock for "docker-flags-043000", held for 6m25.702545813s
	W0729 05:12:44.615219    9610 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-043000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p docker-flags-043000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0729 05:12:44.659556    9610 out.go:177] 
	W0729 05:12:44.680936    9610 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0729 05:12:44.680989    9610 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0729 05:12:44.681022    9610 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0729 05:12:44.723597    9610 out.go:177] 

** /stderr **
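The retry loop in the stderr log above hinges on two probes that both need a reachable container: the SSH host-port lookup via `docker container inspect` and the free-space checks on `/var`. A minimal sketch of what each probe extracts, run here against canned output because the container no longer exists; the JSON fragment, port number, and df figures are illustrative, not taken from this run:

```shell
# Canned fragment of `docker container inspect` output (illustrative values).
inspect_json='"Ports": { "22/tcp": [ { "HostIp": "127.0.0.1", "HostPort": "58222" } ] }'

# minikube's template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
# reduces to "first HostPort bound under 22/tcp"; a rough shell equivalent:
printf '%s\n' "$inspect_json" | sed -n 's/.*"22\/tcp".*"HostPort": "\([0-9]*\)".*/\1/p'

# Canned `df -BG /var` output (illustrative values).
df_out='Filesystem     1G-blocks  Used Available Use% Mounted on
overlay              59G   21G       35G  38% /var'

# The logged check: second line, fourth column = available GiB.
printf '%s\n' "$df_out" | awk 'NR==2{print $4}'
```

When the container is missing, `docker container inspect` exits 1 before either extraction can run, which is exactly the loop the log shows until the 360-second host-creation timeout fires.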
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-043000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-043000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-043000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (161.490911ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-043000 host status: state: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	

** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-043000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-043000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-043000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (158.897707ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node docker-flags-043000 host status: state: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000
	

** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-043000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-043000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
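Had the cluster come up, the assertions at docker_test.go:63 and docker_test.go:73 would have done plain substring matches on the systemctl property output: the `--docker-env` pairs in `Environment` and the `--docker-opt` flags in `ExecStart`. A sketch of that check against a canned `ExecStart` line; the dockerd path and argv shown are illustrative, not output from this run:

```shell
# Canned `systemctl show docker --property=ExecStart` line (illustrative).
exec_start='ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd --debug --icc=true }'

# The test only needs a substring match on the property output:
case "$exec_start" in
  *--debug*) echo "docker-opt debug was applied" ;;
  *)         echo "docker-opt debug missing" ;;
esac
```

Here the match fails for a different reason: with no container, both `ssh` invocations return empty output (`"\n\n"`), so every substring assertion trivially fails.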
panic.go:626: *** TestDockerFlags FAILED at 2024-07-29 05:12:45.101522 -0700 PDT m=+6795.385629052
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-043000
helpers_test.go:235: (dbg) docker inspect docker-flags-043000:

-- stdout --
	[
	    {
	        "Name": "docker-flags-043000",
	        "Id": "b37222cfbdeee802f4b0a98e50ef88e900b3c87a9378452c0f3b49a360aacf4e",
	        "Created": "2024-07-29T12:06:38.67242124Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "docker-flags-043000"
	        }
	    }
	]

-- /stdout --
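Note what the post-mortem `docker inspect` above actually returned: a Docker *network* named docker-flags-043000 (bridge driver, empty `Containers` map), not a container. The container was never created, but its per-profile network was, and such leftovers can be spotted by name or by the `created_by.minikube.sigs.k8s.io` label. A sketch over a canned `docker network ls` listing; the IDs and the second entry are illustrative:

```shell
# Canned `docker network ls` output (illustrative IDs).
networks='NETWORK ID     NAME                  DRIVER    SCOPE
b37222cfbdee   docker-flags-043000   bridge    local
f00dfeed0001   bridge                bridge    local'

# Leftover per-profile networks show up under the profile name;
# `minikube delete -p docker-flags-043000` removes them, or
# `docker network rm <name>` can be used directly.
printf '%s\n' "$networks" | awk 'NR>1 && $2 == "docker-flags-043000" {print $2}'
```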
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-043000 -n docker-flags-043000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-043000 -n docker-flags-043000: exit status 7 (72.462806ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 05:12:45.192885    9896 status.go:249] status error: host: state: unknown state "docker-flags-043000": docker container inspect docker-flags-043000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-043000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-043000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-043000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-043000
--- FAIL: TestDockerFlags (755.49s)

TestForceSystemdFlag (757.11s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-490000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-490000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 52 (12m36.405054965s)

-- stdout --
	* [force-systemd-flag-490000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-flag-490000" primary control-plane node in "force-systemd-flag-490000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-490000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0729 04:59:33.505329    9269 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:59:33.505518    9269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:59:33.505524    9269 out.go:304] Setting ErrFile to fd 2...
	I0729 04:59:33.505528    9269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:59:33.505714    9269 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 04:59:33.507238    9269 out.go:298] Setting JSON to false
	I0729 04:59:33.529948    9269 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":7143,"bootTime":1722247230,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 04:59:33.530049    9269 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:59:33.551986    9269 out.go:177] * [force-systemd-flag-490000] minikube v1.33.1 on Darwin 14.5
	I0729 04:59:33.595418    9269 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 04:59:33.595448    9269 notify.go:220] Checking for updates...
	I0729 04:59:33.638292    9269 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	I0729 04:59:33.659220    9269 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 04:59:33.680518    9269 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:59:33.701564    9269 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	I0729 04:59:33.722303    9269 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:59:33.744589    9269 config.go:182] Loaded profile config "force-systemd-env-474000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:59:33.744823    9269 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:59:33.769577    9269 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 04:59:33.769747    9269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 04:59:33.848878    9269 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:110 OomKillDisable:false NGoroutines:218 SystemTime:2024-07-29 11:59:33.840110793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 04:59:33.870701    9269 out.go:177] * Using the docker driver based on user configuration
	I0729 04:59:33.912718    9269 start.go:297] selected driver: docker
	I0729 04:59:33.912746    9269 start.go:901] validating driver "docker" against <nil>
	I0729 04:59:33.912762    9269 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:59:33.917076    9269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 04:59:33.996783    9269 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:110 OomKillDisable:false NGoroutines:218 SystemTime:2024-07-29 11:59:33.98814136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 04:59:33.996968    9269 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:59:33.997158    9269 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 04:59:34.018699    9269 out.go:177] * Using Docker Desktop driver with root privileges
	I0729 04:59:34.039663    9269 cni.go:84] Creating CNI manager for ""
	I0729 04:59:34.039703    9269 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:59:34.039722    9269 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:59:34.039814    9269 start.go:340] cluster config:
	{Name:force-systemd-flag-490000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-490000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:59:34.061674    9269 out.go:177] * Starting "force-systemd-flag-490000" primary control-plane node in "force-systemd-flag-490000" cluster
	I0729 04:59:34.103487    9269 cache.go:121] Beginning downloading kic base image for docker with docker
	I0729 04:59:34.124697    9269 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0729 04:59:34.166526    9269 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:59:34.166607    9269 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0729 04:59:34.166625    9269 cache.go:56] Caching tarball of preloaded images
	I0729 04:59:34.166629    9269 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 04:59:34.166890    9269 preload.go:172] Found /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0729 04:59:34.166909    9269 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:59:34.167080    9269 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/force-systemd-flag-490000/config.json ...
	I0729 04:59:34.167681    9269 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/force-systemd-flag-490000/config.json: {Name:mkfa8a772659ebaa32298f7881460065018ff856 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0729 04:59:34.192233    9269 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0729 04:59:34.192258    9269 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 04:59:34.192414    9269 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 04:59:34.192438    9269 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 04:59:34.192450    9269 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 04:59:34.192461    9269 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 04:59:34.192466    9269 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0729 04:59:34.195622    9269 image.go:273] response: 
	I0729 04:59:34.328050    9269 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0729 04:59:34.328093    9269 cache.go:194] Successfully downloaded all kic artifacts
	I0729 04:59:34.328143    9269 start.go:360] acquireMachinesLock for force-systemd-flag-490000: {Name:mk2303786ef192b772f22f6e0f4c1c54630aa590 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:59:34.328323    9269 start.go:364] duration metric: took 168.177µs to acquireMachinesLock for "force-systemd-flag-490000"
	I0729 04:59:34.328353    9269 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-490000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-490000 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPat
h: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:59:34.328412    9269 start.go:125] createHost starting for "" (driver="docker")
	I0729 04:59:34.369823    9269 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0729 04:59:34.370026    9269 start.go:159] libmachine.API.Create for "force-systemd-flag-490000" (driver="docker")
	I0729 04:59:34.370054    9269 client.go:168] LocalClient.Create starting
	I0729 04:59:34.370163    9269 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/ca.pem
	I0729 04:59:34.370215    9269 main.go:141] libmachine: Decoding PEM data...
	I0729 04:59:34.370234    9269 main.go:141] libmachine: Parsing certificate...
	I0729 04:59:34.370285    9269 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/cert.pem
	I0729 04:59:34.370324    9269 main.go:141] libmachine: Decoding PEM data...
	I0729 04:59:34.370332    9269 main.go:141] libmachine: Parsing certificate...
	I0729 04:59:34.370879    9269 cli_runner.go:164] Run: docker network inspect force-systemd-flag-490000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 04:59:34.388169    9269 cli_runner.go:211] docker network inspect force-systemd-flag-490000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 04:59:34.388287    9269 network_create.go:284] running [docker network inspect force-systemd-flag-490000] to gather additional debugging logs...
	I0729 04:59:34.388313    9269 cli_runner.go:164] Run: docker network inspect force-systemd-flag-490000
	W0729 04:59:34.405710    9269 cli_runner.go:211] docker network inspect force-systemd-flag-490000 returned with exit code 1
	I0729 04:59:34.405751    9269 network_create.go:287] error running [docker network inspect force-systemd-flag-490000]: docker network inspect force-systemd-flag-490000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-490000 not found
	I0729 04:59:34.405764    9269 network_create.go:289] output of [docker network inspect force-systemd-flag-490000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-490000 not found
	
	** /stderr **
	I0729 04:59:34.405877    9269 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 04:59:34.424959    9269 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:59:34.426354    9269 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:59:34.426712    9269 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014e6c00}
	I0729 04:59:34.426730    9269 network_create.go:124] attempt to create docker network force-systemd-flag-490000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0729 04:59:34.426801    9269 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-490000 force-systemd-flag-490000
	I0729 04:59:34.490156    9269 network_create.go:108] docker network force-systemd-flag-490000 192.168.67.0/24 created
	I0729 04:59:34.490193    9269 kic.go:121] calculated static IP "192.168.67.2" for the "force-systemd-flag-490000" container
	I0729 04:59:34.490310    9269 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 04:59:34.510909    9269 cli_runner.go:164] Run: docker volume create force-systemd-flag-490000 --label name.minikube.sigs.k8s.io=force-systemd-flag-490000 --label created_by.minikube.sigs.k8s.io=true
	I0729 04:59:34.529331    9269 oci.go:103] Successfully created a docker volume force-systemd-flag-490000
	I0729 04:59:34.529449    9269 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-490000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-490000 --entrypoint /usr/bin/test -v force-systemd-flag-490000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 04:59:34.938831    9269 oci.go:107] Successfully prepared a docker volume force-systemd-flag-490000
	I0729 04:59:34.938879    9269 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:59:34.938895    9269 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 04:59:34.939017    9269 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-490000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0729 05:05:34.369954    9269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 05:05:34.370100    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:05:34.389915    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:05:34.390043    9269 retry.go:31] will retry after 337.127636ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:34.727702    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:05:34.748406    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:05:34.748495    9269 retry.go:31] will retry after 335.974408ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:35.086865    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:05:35.106433    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:05:35.106525    9269 retry.go:31] will retry after 375.191454ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:35.482431    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:05:35.502748    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:05:35.502866    9269 retry.go:31] will retry after 477.411191ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:35.982629    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:05:36.002524    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	W0729 05:05:36.002627    9269 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	
	W0729 05:05:36.002647    9269 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:36.002715    9269 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 05:05:36.002776    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:05:36.020947    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:05:36.021061    9269 retry.go:31] will retry after 141.27289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:36.164722    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:05:36.184932    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:05:36.185027    9269 retry.go:31] will retry after 353.541805ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:36.539539    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:05:36.559316    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:05:36.559419    9269 retry.go:31] will retry after 527.564126ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:37.087364    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:05:37.108043    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:05:37.108133    9269 retry.go:31] will retry after 608.316089ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:37.718109    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:05:37.738016    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	W0729 05:05:37.738117    9269 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	
	W0729 05:05:37.738137    9269 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:37.738150    9269 start.go:128] duration metric: took 6m3.410616846s to createHost
	I0729 05:05:37.738165    9269 start.go:83] releasing machines lock for "force-systemd-flag-490000", held for 6m3.410724472s
	W0729 05:05:37.738182    9269 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0729 05:05:37.738628    9269 cli_runner.go:164] Run: docker container inspect force-systemd-flag-490000 --format={{.State.Status}}
	W0729 05:05:37.755657    9269 cli_runner.go:211] docker container inspect force-systemd-flag-490000 --format={{.State.Status}} returned with exit code 1
	I0729 05:05:37.755718    9269 delete.go:82] Unable to get host status for force-systemd-flag-490000, assuming it has already been deleted: state: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	W0729 05:05:37.755818    9269 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0729 05:05:37.755827    9269 start.go:729] Will try again in 5 seconds ...
	I0729 05:05:42.758047    9269 start.go:360] acquireMachinesLock for force-systemd-flag-490000: {Name:mk2303786ef192b772f22f6e0f4c1c54630aa590 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 05:05:42.758310    9269 start.go:364] duration metric: took 217.483µs to acquireMachinesLock for "force-systemd-flag-490000"
	I0729 05:05:42.758351    9269 start.go:96] Skipping create...Using existing machine configuration
	I0729 05:05:42.758366    9269 fix.go:54] fixHost starting: 
	I0729 05:05:42.758783    9269 cli_runner.go:164] Run: docker container inspect force-systemd-flag-490000 --format={{.State.Status}}
	W0729 05:05:42.778171    9269 cli_runner.go:211] docker container inspect force-systemd-flag-490000 --format={{.State.Status}} returned with exit code 1
	I0729 05:05:42.778229    9269 fix.go:112] recreateIfNeeded on force-systemd-flag-490000: state= err=unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:42.778250    9269 fix.go:117] machineExists: false. err=machine does not exist
	I0729 05:05:42.800316    9269 out.go:177] * docker "force-systemd-flag-490000" container is missing, will recreate.
	I0729 05:05:42.821661    9269 delete.go:124] DEMOLISHING force-systemd-flag-490000 ...
	I0729 05:05:42.821876    9269 cli_runner.go:164] Run: docker container inspect force-systemd-flag-490000 --format={{.State.Status}}
	W0729 05:05:42.841310    9269 cli_runner.go:211] docker container inspect force-systemd-flag-490000 --format={{.State.Status}} returned with exit code 1
	W0729 05:05:42.841360    9269 stop.go:83] unable to get state: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:42.841378    9269 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:42.841767    9269 cli_runner.go:164] Run: docker container inspect force-systemd-flag-490000 --format={{.State.Status}}
	W0729 05:05:42.858456    9269 cli_runner.go:211] docker container inspect force-systemd-flag-490000 --format={{.State.Status}} returned with exit code 1
	I0729 05:05:42.858508    9269 delete.go:82] Unable to get host status for force-systemd-flag-490000, assuming it has already been deleted: state: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:42.858600    9269 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-490000
	W0729 05:05:42.875875    9269 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-490000 returned with exit code 1
	I0729 05:05:42.875917    9269 kic.go:371] could not find the container force-systemd-flag-490000 to remove it. will try anyways
	I0729 05:05:42.876003    9269 cli_runner.go:164] Run: docker container inspect force-systemd-flag-490000 --format={{.State.Status}}
	W0729 05:05:42.893196    9269 cli_runner.go:211] docker container inspect force-systemd-flag-490000 --format={{.State.Status}} returned with exit code 1
	W0729 05:05:42.893248    9269 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:42.893330    9269 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-490000 /bin/bash -c "sudo init 0"
	W0729 05:05:42.910421    9269 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-490000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0729 05:05:42.910459    9269 oci.go:650] error shutdown force-systemd-flag-490000: docker exec --privileged -t force-systemd-flag-490000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:43.912892    9269 cli_runner.go:164] Run: docker container inspect force-systemd-flag-490000 --format={{.State.Status}}
	W0729 05:05:43.933052    9269 cli_runner.go:211] docker container inspect force-systemd-flag-490000 --format={{.State.Status}} returned with exit code 1
	I0729 05:05:43.933106    9269 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:43.933117    9269 oci.go:664] temporary error: container force-systemd-flag-490000 status is  but expect it to be exited
	I0729 05:05:43.933143    9269 retry.go:31] will retry after 416.928316ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:44.352145    9269 cli_runner.go:164] Run: docker container inspect force-systemd-flag-490000 --format={{.State.Status}}
	W0729 05:05:44.371606    9269 cli_runner.go:211] docker container inspect force-systemd-flag-490000 --format={{.State.Status}} returned with exit code 1
	I0729 05:05:44.371654    9269 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:44.371664    9269 oci.go:664] temporary error: container force-systemd-flag-490000 status is  but expect it to be exited
	I0729 05:05:44.371690    9269 retry.go:31] will retry after 456.847461ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:44.830073    9269 cli_runner.go:164] Run: docker container inspect force-systemd-flag-490000 --format={{.State.Status}}
	W0729 05:05:44.850752    9269 cli_runner.go:211] docker container inspect force-systemd-flag-490000 --format={{.State.Status}} returned with exit code 1
	I0729 05:05:44.850814    9269 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:44.850824    9269 oci.go:664] temporary error: container force-systemd-flag-490000 status is  but expect it to be exited
	I0729 05:05:44.850847    9269 retry.go:31] will retry after 1.619038434s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:46.472253    9269 cli_runner.go:164] Run: docker container inspect force-systemd-flag-490000 --format={{.State.Status}}
	W0729 05:05:46.491557    9269 cli_runner.go:211] docker container inspect force-systemd-flag-490000 --format={{.State.Status}} returned with exit code 1
	I0729 05:05:46.491609    9269 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:46.491619    9269 oci.go:664] temporary error: container force-systemd-flag-490000 status is  but expect it to be exited
	I0729 05:05:46.491643    9269 retry.go:31] will retry after 1.240389858s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:47.734294    9269 cli_runner.go:164] Run: docker container inspect force-systemd-flag-490000 --format={{.State.Status}}
	W0729 05:05:47.762981    9269 cli_runner.go:211] docker container inspect force-systemd-flag-490000 --format={{.State.Status}} returned with exit code 1
	I0729 05:05:47.763042    9269 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:47.763054    9269 oci.go:664] temporary error: container force-systemd-flag-490000 status is  but expect it to be exited
	I0729 05:05:47.763081    9269 retry.go:31] will retry after 2.641658663s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:50.405067    9269 cli_runner.go:164] Run: docker container inspect force-systemd-flag-490000 --format={{.State.Status}}
	W0729 05:05:50.424506    9269 cli_runner.go:211] docker container inspect force-systemd-flag-490000 --format={{.State.Status}} returned with exit code 1
	I0729 05:05:50.424559    9269 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:50.424569    9269 oci.go:664] temporary error: container force-systemd-flag-490000 status is  but expect it to be exited
	I0729 05:05:50.424597    9269 retry.go:31] will retry after 4.403523052s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:54.830543    9269 cli_runner.go:164] Run: docker container inspect force-systemd-flag-490000 --format={{.State.Status}}
	W0729 05:05:54.850823    9269 cli_runner.go:211] docker container inspect force-systemd-flag-490000 --format={{.State.Status}} returned with exit code 1
	I0729 05:05:54.850919    9269 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:05:54.850937    9269 oci.go:664] temporary error: container force-systemd-flag-490000 status is  but expect it to be exited
	I0729 05:05:54.850962    9269 retry.go:31] will retry after 7.428213817s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:06:02.279518    9269 cli_runner.go:164] Run: docker container inspect force-systemd-flag-490000 --format={{.State.Status}}
	W0729 05:06:02.298811    9269 cli_runner.go:211] docker container inspect force-systemd-flag-490000 --format={{.State.Status}} returned with exit code 1
	I0729 05:06:02.298865    9269 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:06:02.298876    9269 oci.go:664] temporary error: container force-systemd-flag-490000 status is  but expect it to be exited
	I0729 05:06:02.298910    9269 oci.go:88] couldn't shut down force-systemd-flag-490000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	 
	I0729 05:06:02.298983    9269 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-490000
	I0729 05:06:02.317127    9269 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-490000
	W0729 05:06:02.334418    9269 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-490000 returned with exit code 1
	I0729 05:06:02.334551    9269 cli_runner.go:164] Run: docker network inspect force-systemd-flag-490000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 05:06:02.352244    9269 cli_runner.go:164] Run: docker network rm force-systemd-flag-490000
	I0729 05:06:02.434432    9269 fix.go:124] Sleeping 1 second for extra luck!
	I0729 05:06:03.436617    9269 start.go:125] createHost starting for "" (driver="docker")
	I0729 05:06:03.461526    9269 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0729 05:06:03.461749    9269 start.go:159] libmachine.API.Create for "force-systemd-flag-490000" (driver="docker")
	I0729 05:06:03.461776    9269 client.go:168] LocalClient.Create starting
	I0729 05:06:03.461944    9269 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/ca.pem
	I0729 05:06:03.462019    9269 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:03.462037    9269 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:03.462098    9269 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/cert.pem
	I0729 05:06:03.462153    9269 main.go:141] libmachine: Decoding PEM data...
	I0729 05:06:03.462163    9269 main.go:141] libmachine: Parsing certificate...
	I0729 05:06:03.462688    9269 cli_runner.go:164] Run: docker network inspect force-systemd-flag-490000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 05:06:03.481305    9269 cli_runner.go:211] docker network inspect force-systemd-flag-490000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 05:06:03.481399    9269 network_create.go:284] running [docker network inspect force-systemd-flag-490000] to gather additional debugging logs...
	I0729 05:06:03.481417    9269 cli_runner.go:164] Run: docker network inspect force-systemd-flag-490000
	W0729 05:06:03.498942    9269 cli_runner.go:211] docker network inspect force-systemd-flag-490000 returned with exit code 1
	I0729 05:06:03.498974    9269 network_create.go:287] error running [docker network inspect force-systemd-flag-490000]: docker network inspect force-systemd-flag-490000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-490000 not found
	I0729 05:06:03.498992    9269 network_create.go:289] output of [docker network inspect force-systemd-flag-490000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-490000 not found
	
	** /stderr **
	I0729 05:06:03.499137    9269 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 05:06:03.532917    9269 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 05:06:03.534436    9269 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 05:06:03.536277    9269 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 05:06:03.538225    9269 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 05:06:03.540131    9269 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 05:06:03.540833    9269 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0013ee8f0}
	I0729 05:06:03.540855    9269 network_create.go:124] attempt to create docker network force-systemd-flag-490000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0729 05:06:03.540994    9269 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-490000 force-systemd-flag-490000
	I0729 05:06:03.606628    9269 network_create.go:108] docker network force-systemd-flag-490000 192.168.94.0/24 created
	I0729 05:06:03.606668    9269 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-flag-490000" container
	I0729 05:06:03.606772    9269 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 05:06:03.626294    9269 cli_runner.go:164] Run: docker volume create force-systemd-flag-490000 --label name.minikube.sigs.k8s.io=force-systemd-flag-490000 --label created_by.minikube.sigs.k8s.io=true
	I0729 05:06:03.643484    9269 oci.go:103] Successfully created a docker volume force-systemd-flag-490000
	I0729 05:06:03.643592    9269 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-490000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-490000 --entrypoint /usr/bin/test -v force-systemd-flag-490000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 05:06:03.894570    9269 oci.go:107] Successfully prepared a docker volume force-systemd-flag-490000
	I0729 05:06:03.894626    9269 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 05:06:03.894644    9269 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 05:06:03.894785    9269 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-490000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0729 05:12:03.463213    9269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 05:12:03.463337    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:12:03.483339    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:12:03.483456    9269 retry.go:31] will retry after 240.698545ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:12:03.726593    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:12:03.746817    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:12:03.746917    9269 retry.go:31] will retry after 394.59278ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:12:04.143931    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:12:04.163495    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:12:04.163593    9269 retry.go:31] will retry after 369.881042ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:12:04.535581    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:12:04.555543    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:12:04.555642    9269 retry.go:31] will retry after 431.257097ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:12:04.989113    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:12:05.009160    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	W0729 05:12:05.009287    9269 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	
	W0729 05:12:05.009314    9269 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:12:05.009374    9269 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 05:12:05.009440    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:12:05.027086    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:12:05.027175    9269 retry.go:31] will retry after 353.862636ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:12:05.383101    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:12:05.402052    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:12:05.402148    9269 retry.go:31] will retry after 418.870413ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:12:05.821324    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:12:05.840550    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:12:05.840652    9269 retry.go:31] will retry after 285.402691ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:12:06.128434    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:12:06.148646    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:12:06.148742    9269 retry.go:31] will retry after 686.595616ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:12:06.835923    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:12:06.856365    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	W0729 05:12:06.856473    9269 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	
	W0729 05:12:06.856489    9269 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:12:06.856499    9269 start.go:128] duration metric: took 6m3.420726122s to createHost
	I0729 05:12:06.856581    9269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 05:12:06.856639    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:12:06.874238    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:12:06.874326    9269 retry.go:31] will retry after 157.478941ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:12:07.034267    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:12:07.052650    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:12:07.052753    9269 retry.go:31] will retry after 458.454658ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:12:07.511979    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:12:07.531371    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:12:07.531466    9269 retry.go:31] will retry after 375.977869ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:12:07.909829    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:12:07.929793    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:12:07.929897    9269 retry.go:31] will retry after 584.104987ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:12:08.515338    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:12:08.535347    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	W0729 05:12:08.535463    9269 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	
	W0729 05:12:08.535478    9269 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:12:08.535544    9269 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 05:12:08.535598    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:12:08.553270    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:12:08.553365    9269 retry.go:31] will retry after 253.209806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:12:08.808799    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:12:08.828378    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:12:08.828477    9269 retry.go:31] will retry after 512.395121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:12:09.341284    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:12:09.361098    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	I0729 05:12:09.361195    9269 retry.go:31] will retry after 324.883028ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:12:09.686380    9269 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000
	W0729 05:12:09.704464    9269 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000 returned with exit code 1
	W0729 05:12:09.704563    9269 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	
	W0729 05:12:09.704580    9269 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-490000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-490000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	I0729 05:12:09.704597    9269 fix.go:56] duration metric: took 6m26.947180801s for fixHost
	I0729 05:12:09.704606    9269 start.go:83] releasing machines lock for "force-systemd-flag-490000", held for 6m26.947231169s
	W0729 05:12:09.704689    9269 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-490000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-490000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0729 05:12:09.748412    9269 out.go:177] 
	W0729 05:12:09.770291    9269 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0729 05:12:09.770340    9269 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0729 05:12:09.770380    9269 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0729 05:12:09.792421    9269 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-490000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-490000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-490000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (162.10755ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-flag-490000 host status: state: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000
	

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-490000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-29 05:12:10.030248 -0700 PDT m=+6760.366120106
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-490000
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-490000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-490000",
	        "Id": "a4e52d4db57c8d0ebad6cfbdc04bb7014cd2cffd71f2e339012f0b2cfe7817e2",
	        "Created": "2024-07-29T12:06:03.556702183Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-flag-490000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-490000 -n force-systemd-flag-490000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-490000 -n force-systemd-flag-490000: exit status 7 (73.302322ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 05:12:10.122862    9815 status.go:249] status error: host: state: unknown state "force-systemd-flag-490000": docker container inspect force-systemd-flag-490000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-490000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-490000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-490000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-490000
--- FAIL: TestForceSystemdFlag (757.11s)
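Every retry in the log above re-runs the same lookup: `docker container inspect` with a Go template that digs the SSH host port out of the container's port bindings. As a rough sketch of the path that template walks (the sample JSON below is hypothetical, not taken from this run), the equivalent extraction in Python:

```python
import json

# Hypothetical shape of `docker container inspect` output for a running
# container with 22/tcp published. The Go template
#   {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
# walks the same path. In the failing run above, the container does not
# exist, so inspect exits 1 before any of this applies.
sample = json.loads("""
[{"NetworkSettings": {"Ports": {"22/tcp": [{"HostIp": "127.0.0.1", "HostPort": "55123"}]}}}]
""")

def ssh_host_port(inspect_json):
    # index .NetworkSettings.Ports "22/tcp"  -> list of host bindings
    # index ... 0                            -> first binding
    # .HostPort                              -> mapped port, as a string
    return inspect_json[0]["NetworkSettings"]["Ports"]["22/tcp"][0]["HostPort"]

print(ssh_host_port(sample))  # -> 55123
```

This is why minikube's retry loop keeps failing with "No such container": the lookup never reaches the port map at all, because the inspect call itself errors out.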

TestForceSystemdEnv (758.1s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-474000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E0729 04:48:53.851836    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 04:49:14.399612    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 04:51:56.912789    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 04:53:53.851004    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 04:54:14.400857    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 04:57:17.455672    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 04:58:53.850574    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 04:59:14.399502    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-474000 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 52 (12m37.39272399s)

-- stdout --
	* [force-systemd-env-474000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-env-474000" primary control-plane node in "force-systemd-env-474000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-474000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I0729 04:47:32.050396    9052 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:47:32.050579    9052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:47:32.050585    9052 out.go:304] Setting ErrFile to fd 2...
	I0729 04:47:32.050589    9052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:47:32.050765    9052 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 04:47:32.052277    9052 out.go:298] Setting JSON to false
	I0729 04:47:32.074716    9052 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6422,"bootTime":1722247230,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 04:47:32.074806    9052 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:47:32.096624    9052 out.go:177] * [force-systemd-env-474000] minikube v1.33.1 on Darwin 14.5
	I0729 04:47:32.138578    9052 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 04:47:32.138673    9052 notify.go:220] Checking for updates...
	I0729 04:47:32.182129    9052 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	I0729 04:47:32.203353    9052 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 04:47:32.224414    9052 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:47:32.245400    9052 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	I0729 04:47:32.266407    9052 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0729 04:47:32.288251    9052 config.go:182] Loaded profile config "offline-docker-789000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:47:32.288390    9052 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:47:32.312385    9052 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 04:47:32.312642    9052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 04:47:32.391091    9052 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:98 OomKillDisable:false NGoroutines:182 SystemTime:2024-07-29 11:47:32.381907246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13
-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-
desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-pl
ugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 04:47:32.433696    9052 out.go:177] * Using the docker driver based on user configuration
	I0729 04:47:32.454764    9052 start.go:297] selected driver: docker
	I0729 04:47:32.454786    9052 start.go:901] validating driver "docker" against <nil>
	I0729 04:47:32.454800    9052 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:47:32.459464    9052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 04:47:32.537411    9052 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:98 OomKillDisable:false NGoroutines:182 SystemTime:2024-07-29 11:47:32.528546587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13
-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-
desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-pl
ugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 04:47:32.537589    9052 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:47:32.537761    9052 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 04:47:32.558519    9052 out.go:177] * Using Docker Desktop driver with root privileges
	I0729 04:47:32.579618    9052 cni.go:84] Creating CNI manager for ""
	I0729 04:47:32.579634    9052 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 04:47:32.579650    9052 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 04:47:32.579699    9052 start.go:340] cluster config:
	{Name:force-systemd-env-474000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-474000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:47:32.600630    9052 out.go:177] * Starting "force-systemd-env-474000" primary control-plane node in "force-systemd-env-474000" cluster
	I0729 04:47:32.621463    9052 cache.go:121] Beginning downloading kic base image for docker with docker
	I0729 04:47:32.642826    9052 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0729 04:47:32.684713    9052 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:47:32.684759    9052 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 04:47:32.684789    9052 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0729 04:47:32.684818    9052 cache.go:56] Caching tarball of preloaded images
	I0729 04:47:32.685053    9052 preload.go:172] Found /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0729 04:47:32.685073    9052 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:47:32.685854    9052 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/force-systemd-env-474000/config.json ...
	I0729 04:47:32.686072    9052 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/force-systemd-env-474000/config.json: {Name:mkd666ff9a8b00c2182797cadf326486680afa09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0729 04:47:32.710179    9052 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0729 04:47:32.710214    9052 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 04:47:32.710334    9052 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 04:47:32.710352    9052 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 04:47:32.710358    9052 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 04:47:32.710366    9052 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 04:47:32.710371    9052 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0729 04:47:32.713504    9052 image.go:273] response: 
	I0729 04:47:32.838020    9052 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0729 04:47:32.838094    9052 cache.go:194] Successfully downloaded all kic artifacts
	I0729 04:47:32.838144    9052 start.go:360] acquireMachinesLock for force-systemd-env-474000: {Name:mkf6dc92452d30b4a51338867910717f658a71cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:47:32.838315    9052 start.go:364] duration metric: took 158.677µs to acquireMachinesLock for "force-systemd-env-474000"
	I0729 04:47:32.838346    9052 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-474000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-474000 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:47:32.838415    9052 start.go:125] createHost starting for "" (driver="docker")
	I0729 04:47:32.881327    9052 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0729 04:47:32.881543    9052 start.go:159] libmachine.API.Create for "force-systemd-env-474000" (driver="docker")
	I0729 04:47:32.881570    9052 client.go:168] LocalClient.Create starting
	I0729 04:47:32.881675    9052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/ca.pem
	I0729 04:47:32.881729    9052 main.go:141] libmachine: Decoding PEM data...
	I0729 04:47:32.881745    9052 main.go:141] libmachine: Parsing certificate...
	I0729 04:47:32.881793    9052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/cert.pem
	I0729 04:47:32.881831    9052 main.go:141] libmachine: Decoding PEM data...
	I0729 04:47:32.881839    9052 main.go:141] libmachine: Parsing certificate...
	I0729 04:47:32.882296    9052 cli_runner.go:164] Run: docker network inspect force-systemd-env-474000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 04:47:32.899885    9052 cli_runner.go:211] docker network inspect force-systemd-env-474000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 04:47:32.899994    9052 network_create.go:284] running [docker network inspect force-systemd-env-474000] to gather additional debugging logs...
	I0729 04:47:32.900009    9052 cli_runner.go:164] Run: docker network inspect force-systemd-env-474000
	W0729 04:47:32.917265    9052 cli_runner.go:211] docker network inspect force-systemd-env-474000 returned with exit code 1
	I0729 04:47:32.917290    9052 network_create.go:287] error running [docker network inspect force-systemd-env-474000]: docker network inspect force-systemd-env-474000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-474000 not found
	I0729 04:47:32.917300    9052 network_create.go:289] output of [docker network inspect force-systemd-env-474000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-474000 not found
	
	** /stderr **
	I0729 04:47:32.917454    9052 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 04:47:32.936122    9052 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:47:32.937739    9052 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:47:32.939131    9052 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:47:32.939481    9052 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014e4b10}
	I0729 04:47:32.939496    9052 network_create.go:124] attempt to create docker network force-systemd-env-474000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0729 04:47:32.939566    9052 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-474000 force-systemd-env-474000
	W0729 04:47:32.957204    9052 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-474000 force-systemd-env-474000 returned with exit code 1
	W0729 04:47:32.957242    9052 network_create.go:149] failed to create docker network force-systemd-env-474000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-474000 force-systemd-env-474000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0729 04:47:32.957259    9052 network_create.go:116] failed to create docker network force-systemd-env-474000 192.168.76.0/24, will retry: subnet is taken
	I0729 04:47:32.958602    9052 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:47:32.958957    9052 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00161b2b0}
	I0729 04:47:32.958969    9052 network_create.go:124] attempt to create docker network force-systemd-env-474000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0729 04:47:32.959046    9052 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-474000 force-systemd-env-474000
	I0729 04:47:33.022208    9052 network_create.go:108] docker network force-systemd-env-474000 192.168.85.0/24 created
	I0729 04:47:33.022253    9052 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-474000" container
	I0729 04:47:33.022355    9052 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 04:47:33.041381    9052 cli_runner.go:164] Run: docker volume create force-systemd-env-474000 --label name.minikube.sigs.k8s.io=force-systemd-env-474000 --label created_by.minikube.sigs.k8s.io=true
	I0729 04:47:33.059922    9052 oci.go:103] Successfully created a docker volume force-systemd-env-474000
	I0729 04:47:33.060041    9052 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-474000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-474000 --entrypoint /usr/bin/test -v force-systemd-env-474000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 04:47:33.485632    9052 oci.go:107] Successfully prepared a docker volume force-systemd-env-474000
	I0729 04:47:33.485705    9052 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:47:33.485723    9052 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 04:47:33.485865    9052 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-474000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0729 04:53:32.883076    9052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 04:53:32.883231    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 04:53:32.902952    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	I0729 04:53:32.903089    9052 retry.go:31] will retry after 314.2293ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:33.219731    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 04:53:33.238463    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	I0729 04:53:33.238576    9052 retry.go:31] will retry after 201.529922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:33.442559    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 04:53:33.461968    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	I0729 04:53:33.462073    9052 retry.go:31] will retry after 378.493058ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:33.842451    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 04:53:33.862399    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	I0729 04:53:33.862491    9052 retry.go:31] will retry after 981.467444ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:34.844344    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 04:53:34.863157    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	W0729 04:53:34.863280    9052 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	
	W0729 04:53:34.863299    9052 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:34.863369    9052 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 04:53:34.863438    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 04:53:34.880952    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	I0729 04:53:34.881042    9052 retry.go:31] will retry after 243.982568ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:35.127292    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 04:53:35.146432    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	I0729 04:53:35.146526    9052 retry.go:31] will retry after 556.938173ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:35.704708    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 04:53:35.724811    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	I0729 04:53:35.724900    9052 retry.go:31] will retry after 757.474918ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:36.484834    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 04:53:36.505275    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	W0729 04:53:36.505375    9052 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	
	W0729 04:53:36.505389    9052 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:36.505411    9052 start.go:128] duration metric: took 6m3.667873613s to createHost
	I0729 04:53:36.505418    9052 start.go:83] releasing machines lock for "force-systemd-env-474000", held for 6m3.667986331s
	W0729 04:53:36.505435    9052 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0729 04:53:36.505872    9052 cli_runner.go:164] Run: docker container inspect force-systemd-env-474000 --format={{.State.Status}}
	W0729 04:53:36.523827    9052 cli_runner.go:211] docker container inspect force-systemd-env-474000 --format={{.State.Status}} returned with exit code 1
	I0729 04:53:36.523882    9052 delete.go:82] Unable to get host status for force-systemd-env-474000, assuming it has already been deleted: state: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	W0729 04:53:36.523980    9052 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0729 04:53:36.523990    9052 start.go:729] Will try again in 5 seconds ...
	I0729 04:53:41.526245    9052 start.go:360] acquireMachinesLock for force-systemd-env-474000: {Name:mkf6dc92452d30b4a51338867910717f658a71cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:53:41.526449    9052 start.go:364] duration metric: took 162.166µs to acquireMachinesLock for "force-systemd-env-474000"
	I0729 04:53:41.526490    9052 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:53:41.526507    9052 fix.go:54] fixHost starting: 
	I0729 04:53:41.526907    9052 cli_runner.go:164] Run: docker container inspect force-systemd-env-474000 --format={{.State.Status}}
	W0729 04:53:41.546312    9052 cli_runner.go:211] docker container inspect force-systemd-env-474000 --format={{.State.Status}} returned with exit code 1
	I0729 04:53:41.546367    9052 fix.go:112] recreateIfNeeded on force-systemd-env-474000: state= err=unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:41.546390    9052 fix.go:117] machineExists: false. err=machine does not exist
	I0729 04:53:41.568125    9052 out.go:177] * docker "force-systemd-env-474000" container is missing, will recreate.
	I0729 04:53:41.590152    9052 delete.go:124] DEMOLISHING force-systemd-env-474000 ...
	I0729 04:53:41.590335    9052 cli_runner.go:164] Run: docker container inspect force-systemd-env-474000 --format={{.State.Status}}
	W0729 04:53:41.608816    9052 cli_runner.go:211] docker container inspect force-systemd-env-474000 --format={{.State.Status}} returned with exit code 1
	W0729 04:53:41.608876    9052 stop.go:83] unable to get state: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:41.608897    9052 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:41.609285    9052 cli_runner.go:164] Run: docker container inspect force-systemd-env-474000 --format={{.State.Status}}
	W0729 04:53:41.626355    9052 cli_runner.go:211] docker container inspect force-systemd-env-474000 --format={{.State.Status}} returned with exit code 1
	I0729 04:53:41.626413    9052 delete.go:82] Unable to get host status for force-systemd-env-474000, assuming it has already been deleted: state: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:41.626494    9052 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-474000
	W0729 04:53:41.643495    9052 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-474000 returned with exit code 1
	I0729 04:53:41.643535    9052 kic.go:371] could not find the container force-systemd-env-474000 to remove it. will try anyways
	I0729 04:53:41.643608    9052 cli_runner.go:164] Run: docker container inspect force-systemd-env-474000 --format={{.State.Status}}
	W0729 04:53:41.660495    9052 cli_runner.go:211] docker container inspect force-systemd-env-474000 --format={{.State.Status}} returned with exit code 1
	W0729 04:53:41.660553    9052 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:41.660644    9052 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-474000 /bin/bash -c "sudo init 0"
	W0729 04:53:41.677914    9052 cli_runner.go:211] docker exec --privileged -t force-systemd-env-474000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0729 04:53:41.677947    9052 oci.go:650] error shutdown force-systemd-env-474000: docker exec --privileged -t force-systemd-env-474000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:42.678415    9052 cli_runner.go:164] Run: docker container inspect force-systemd-env-474000 --format={{.State.Status}}
	W0729 04:53:42.697292    9052 cli_runner.go:211] docker container inspect force-systemd-env-474000 --format={{.State.Status}} returned with exit code 1
	I0729 04:53:42.697346    9052 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:42.697359    9052 oci.go:664] temporary error: container force-systemd-env-474000 status is  but expect it to be exited
	I0729 04:53:42.697386    9052 retry.go:31] will retry after 457.136941ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:43.155133    9052 cli_runner.go:164] Run: docker container inspect force-systemd-env-474000 --format={{.State.Status}}
	W0729 04:53:43.175496    9052 cli_runner.go:211] docker container inspect force-systemd-env-474000 --format={{.State.Status}} returned with exit code 1
	I0729 04:53:43.175552    9052 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:43.175566    9052 oci.go:664] temporary error: container force-systemd-env-474000 status is  but expect it to be exited
	I0729 04:53:43.175590    9052 retry.go:31] will retry after 604.986996ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:43.782548    9052 cli_runner.go:164] Run: docker container inspect force-systemd-env-474000 --format={{.State.Status}}
	W0729 04:53:43.801675    9052 cli_runner.go:211] docker container inspect force-systemd-env-474000 --format={{.State.Status}} returned with exit code 1
	I0729 04:53:43.801730    9052 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:43.801744    9052 oci.go:664] temporary error: container force-systemd-env-474000 status is  but expect it to be exited
	I0729 04:53:43.801767    9052 retry.go:31] will retry after 819.681453ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:44.621700    9052 cli_runner.go:164] Run: docker container inspect force-systemd-env-474000 --format={{.State.Status}}
	W0729 04:53:44.641901    9052 cli_runner.go:211] docker container inspect force-systemd-env-474000 --format={{.State.Status}} returned with exit code 1
	I0729 04:53:44.641953    9052 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:44.641966    9052 oci.go:664] temporary error: container force-systemd-env-474000 status is  but expect it to be exited
	I0729 04:53:44.641990    9052 retry.go:31] will retry after 2.015869855s: couldn't verify container is exited. %v: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:46.660192    9052 cli_runner.go:164] Run: docker container inspect force-systemd-env-474000 --format={{.State.Status}}
	W0729 04:53:46.679544    9052 cli_runner.go:211] docker container inspect force-systemd-env-474000 --format={{.State.Status}} returned with exit code 1
	I0729 04:53:46.679597    9052 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:46.679609    9052 oci.go:664] temporary error: container force-systemd-env-474000 status is  but expect it to be exited
	I0729 04:53:46.679633    9052 retry.go:31] will retry after 2.017768355s: couldn't verify container is exited. %v: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:48.699805    9052 cli_runner.go:164] Run: docker container inspect force-systemd-env-474000 --format={{.State.Status}}
	W0729 04:53:48.719712    9052 cli_runner.go:211] docker container inspect force-systemd-env-474000 --format={{.State.Status}} returned with exit code 1
	I0729 04:53:48.719758    9052 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:48.719769    9052 oci.go:664] temporary error: container force-systemd-env-474000 status is  but expect it to be exited
	I0729 04:53:48.719803    9052 retry.go:31] will retry after 2.939070259s: couldn't verify container is exited. %v: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:51.661253    9052 cli_runner.go:164] Run: docker container inspect force-systemd-env-474000 --format={{.State.Status}}
	W0729 04:53:51.680815    9052 cli_runner.go:211] docker container inspect force-systemd-env-474000 --format={{.State.Status}} returned with exit code 1
	I0729 04:53:51.680861    9052 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:51.680871    9052 oci.go:664] temporary error: container force-systemd-env-474000 status is  but expect it to be exited
	I0729 04:53:51.680896    9052 retry.go:31] will retry after 2.868821414s: couldn't verify container is exited. %v: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:54.552078    9052 cli_runner.go:164] Run: docker container inspect force-systemd-env-474000 --format={{.State.Status}}
	W0729 04:53:54.571815    9052 cli_runner.go:211] docker container inspect force-systemd-env-474000 --format={{.State.Status}} returned with exit code 1
	I0729 04:53:54.571868    9052 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:53:54.571878    9052 oci.go:664] temporary error: container force-systemd-env-474000 status is  but expect it to be exited
	I0729 04:53:54.571903    9052 retry.go:31] will retry after 7.335148633s: couldn't verify container is exited. %v: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:54:01.909413    9052 cli_runner.go:164] Run: docker container inspect force-systemd-env-474000 --format={{.State.Status}}
	W0729 04:54:01.929264    9052 cli_runner.go:211] docker container inspect force-systemd-env-474000 --format={{.State.Status}} returned with exit code 1
	I0729 04:54:01.929312    9052 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 04:54:01.929322    9052 oci.go:664] temporary error: container force-systemd-env-474000 status is  but expect it to be exited
	I0729 04:54:01.929354    9052 oci.go:88] couldn't shut down force-systemd-env-474000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	 
	I0729 04:54:01.929436    9052 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-474000
	I0729 04:54:01.946571    9052 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-474000
	W0729 04:54:01.963256    9052 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-474000 returned with exit code 1
	I0729 04:54:01.963371    9052 cli_runner.go:164] Run: docker network inspect force-systemd-env-474000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 04:54:01.980765    9052 cli_runner.go:164] Run: docker network rm force-systemd-env-474000
	I0729 04:54:02.093695    9052 fix.go:124] Sleeping 1 second for extra luck!
	I0729 04:54:03.095254    9052 start.go:125] createHost starting for "" (driver="docker")
	I0729 04:54:03.118333    9052 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0729 04:54:03.118495    9052 start.go:159] libmachine.API.Create for "force-systemd-env-474000" (driver="docker")
	I0729 04:54:03.118526    9052 client.go:168] LocalClient.Create starting
	I0729 04:54:03.118751    9052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/ca.pem
	I0729 04:54:03.118852    9052 main.go:141] libmachine: Decoding PEM data...
	I0729 04:54:03.118879    9052 main.go:141] libmachine: Parsing certificate...
	I0729 04:54:03.119013    9052 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/cert.pem
	I0729 04:54:03.119070    9052 main.go:141] libmachine: Decoding PEM data...
	I0729 04:54:03.119087    9052 main.go:141] libmachine: Parsing certificate...
	I0729 04:54:03.119893    9052 cli_runner.go:164] Run: docker network inspect force-systemd-env-474000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 04:54:03.137911    9052 cli_runner.go:211] docker network inspect force-systemd-env-474000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 04:54:03.138024    9052 network_create.go:284] running [docker network inspect force-systemd-env-474000] to gather additional debugging logs...
	I0729 04:54:03.138040    9052 cli_runner.go:164] Run: docker network inspect force-systemd-env-474000
	W0729 04:54:03.154985    9052 cli_runner.go:211] docker network inspect force-systemd-env-474000 returned with exit code 1
	I0729 04:54:03.155012    9052 network_create.go:287] error running [docker network inspect force-systemd-env-474000]: docker network inspect force-systemd-env-474000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-474000 not found
	I0729 04:54:03.155027    9052 network_create.go:289] output of [docker network inspect force-systemd-env-474000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-474000 not found
	
	** /stderr **
	I0729 04:54:03.155193    9052 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 04:54:03.174058    9052 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:54:03.175650    9052 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:54:03.177318    9052 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:54:03.179132    9052 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:54:03.180747    9052 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:54:03.182590    9052 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:54:03.183167    9052 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001684c10}
	I0729 04:54:03.183184    9052 network_create.go:124] attempt to create docker network force-systemd-env-474000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 65535 ...
	I0729 04:54:03.183278    9052 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-474000 force-systemd-env-474000
	I0729 04:54:03.246632    9052 network_create.go:108] docker network force-systemd-env-474000 192.168.103.0/24 created
	I0729 04:54:03.246675    9052 kic.go:121] calculated static IP "192.168.103.2" for the "force-systemd-env-474000" container
	I0729 04:54:03.246778    9052 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 04:54:03.266118    9052 cli_runner.go:164] Run: docker volume create force-systemd-env-474000 --label name.minikube.sigs.k8s.io=force-systemd-env-474000 --label created_by.minikube.sigs.k8s.io=true
	I0729 04:54:03.282900    9052 oci.go:103] Successfully created a docker volume force-systemd-env-474000
	I0729 04:54:03.283021    9052 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-474000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-474000 --entrypoint /usr/bin/test -v force-systemd-env-474000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 04:54:03.535099    9052 oci.go:107] Successfully prepared a docker volume force-systemd-env-474000
	I0729 04:54:03.535137    9052 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:54:03.535150    9052 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 04:54:03.535251    9052 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-474000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0729 05:00:03.119309    9052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 05:00:03.119436    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 05:00:03.138628    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	I0729 05:00:03.138729    9052 retry.go:31] will retry after 177.998888ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 05:00:03.317795    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 05:00:03.336625    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	I0729 05:00:03.336729    9052 retry.go:31] will retry after 267.832641ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 05:00:03.606912    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 05:00:03.626258    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	I0729 05:00:03.626384    9052 retry.go:31] will retry after 368.207648ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 05:00:03.996282    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 05:00:04.016135    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	I0729 05:00:04.016246    9052 retry.go:31] will retry after 584.595028ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 05:00:04.601127    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 05:00:04.619850    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	W0729 05:00:04.619974    9052 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	
	W0729 05:00:04.620001    9052 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 05:00:04.620062    9052 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 05:00:04.620127    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 05:00:04.637107    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	I0729 05:00:04.637208    9052 retry.go:31] will retry after 344.62144ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 05:00:04.984235    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 05:00:05.003492    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	I0729 05:00:05.003608    9052 retry.go:31] will retry after 194.000637ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 05:00:05.200028    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 05:00:05.219859    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	I0729 05:00:05.219968    9052 retry.go:31] will retry after 528.028337ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 05:00:05.748826    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 05:00:05.769581    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	W0729 05:00:05.769690    9052 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	
	W0729 05:00:05.769707    9052 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 05:00:05.769717    9052 start.go:128] duration metric: took 6m2.675325478s to createHost
	I0729 05:00:05.769786    9052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 05:00:05.769848    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 05:00:05.788380    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	I0729 05:00:05.788474    9052 retry.go:31] will retry after 197.02398ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 05:00:05.985876    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 05:00:06.004612    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	I0729 05:00:06.004704    9052 retry.go:31] will retry after 478.649622ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 05:00:06.485772    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 05:00:06.505798    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	I0729 05:00:06.505889    9052 retry.go:31] will retry after 835.943803ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 05:00:07.342970    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 05:00:07.363297    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	W0729 05:00:07.363394    9052 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	
	W0729 05:00:07.363410    9052 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 05:00:07.363474    9052 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 05:00:07.363530    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 05:00:07.381151    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	I0729 05:00:07.381242    9052 retry.go:31] will retry after 210.299171ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 05:00:07.593939    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 05:00:07.613903    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	I0729 05:00:07.613996    9052 retry.go:31] will retry after 209.120027ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 05:00:07.825588    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 05:00:07.845795    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	I0729 05:00:07.845888    9052 retry.go:31] will retry after 838.587756ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 05:00:08.686902    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 05:00:08.707186    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	I0729 05:00:08.707290    9052 retry.go:31] will retry after 525.684906ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 05:00:09.235371    9052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000
	W0729 05:00:09.255180    9052 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000 returned with exit code 1
	W0729 05:00:09.255284    9052 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	
	W0729 05:00:09.255302    9052 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-474000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-474000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	I0729 05:00:09.255328    9052 fix.go:56] duration metric: took 6m27.729771425s for fixHost
	I0729 05:00:09.255335    9052 start.go:83] releasing machines lock for "force-systemd-env-474000", held for 6m27.729824004s
	W0729 05:00:09.255415    9052 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-474000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-474000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0729 05:00:09.298961    9052 out.go:177] 
	W0729 05:00:09.321058    9052 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0729 05:00:09.321125    9052 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0729 05:00:09.321153    9052 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0729 05:00:09.342876    9052 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-474000 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-474000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-474000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (162.607269ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-env-474000 host status: state: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000
	

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-474000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-29 05:00:09.563593 -0700 PDT m=+6039.897700905
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-474000
helpers_test.go:235: (dbg) docker inspect force-systemd-env-474000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-env-474000",
	        "Id": "77f61624dfd4112f4c65b2b5e787a21aac89e28f2d8f6d946097a59eaba0f33e",
	        "Created": "2024-07-29T11:54:03.198834518Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-env-474000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-474000 -n force-systemd-env-474000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-474000 -n force-systemd-env-474000: exit status 7 (73.493925ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 05:00:09.656817    9596 status.go:249] status error: host: state: unknown state "force-systemd-env-474000": docker container inspect force-systemd-env-474000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-474000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-474000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-474000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-474000
--- FAIL: TestForceSystemdEnv (758.10s)

TestMountStart/serial/VerifyMountSecond (885.3s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-051000 ssh -- ls /minikube-host
E0729 03:45:16.711417    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 03:48:53.654524    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 03:49:14.203444    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 03:50:37.251430    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 03:53:53.650765    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 03:54:14.199311    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 03:58:53.645417    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 03:59:14.194465    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-051000 ssh -- ls /minikube-host: signal: killed (14m45.036364338s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-051000 ssh -- ls /minikube-host" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountSecond]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-051000
helpers_test.go:235: (dbg) docker inspect mount-start-2-051000:

-- stdout --
	[
	    {
	        "Id": "d6c98e8f0ac6df285f77f2d6caf5f210cf2b18cdd159502d242ba319ee8495d7",
	        "Created": "2024-07-29T10:44:26.070165538Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 128274,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-29T10:44:26.168342337Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f7a7de1851ee150766e4477ba0f200b8a850318ef537b8ef6899afcaea59940a",
	        "ResolvConfPath": "/var/lib/docker/containers/d6c98e8f0ac6df285f77f2d6caf5f210cf2b18cdd159502d242ba319ee8495d7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d6c98e8f0ac6df285f77f2d6caf5f210cf2b18cdd159502d242ba319ee8495d7/hostname",
	        "HostsPath": "/var/lib/docker/containers/d6c98e8f0ac6df285f77f2d6caf5f210cf2b18cdd159502d242ba319ee8495d7/hosts",
	        "LogPath": "/var/lib/docker/containers/d6c98e8f0ac6df285f77f2d6caf5f210cf2b18cdd159502d242ba319ee8495d7/d6c98e8f0ac6df285f77f2d6caf5f210cf2b18cdd159502d242ba319ee8495d7-json.log",
	        "Name": "/mount-start-2-051000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "mount-start-2-051000:/var",
	                "/host_mnt/Users:/minikube-host"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-2-051000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/23f4e51d54f160e9f6195723d793cba4d4f838b7dd3243c946ade79ffd922b19-init/diff:/var/lib/docker/overlay2/12a0c688492e59cc144289771e1eec036f1039dccfcf0411a6333d30e91ca0e0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/23f4e51d54f160e9f6195723d793cba4d4f838b7dd3243c946ade79ffd922b19/merged",
	                "UpperDir": "/var/lib/docker/overlay2/23f4e51d54f160e9f6195723d793cba4d4f838b7dd3243c946ade79ffd922b19/diff",
	                "WorkDir": "/var/lib/docker/overlay2/23f4e51d54f160e9f6195723d793cba4d4f838b7dd3243c946ade79ffd922b19/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "mount-start-2-051000",
	                "Source": "/var/lib/docker/volumes/mount-start-2-051000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/host_mnt/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-2-051000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-2-051000",
	                "name.minikube.sigs.k8s.io": "mount-start-2-051000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ef73cb79001a66d99124fe3bb9c3ae451455bac0f02e2132967b3ab5ebbdfaee",
	            "SandboxKey": "/var/run/docker/netns/ef73cb79001a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51755"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51756"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51757"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51758"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51759"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-2-051000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "1a597111db3123dcfe41037a5925f3976d4b611981cc54f7fde6c2d37fbd9346",
	                    "EndpointID": "9eceef39ffc92b017485d9fc8fa35eadcbf72153986e20150928fcdc829674d8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "mount-start-2-051000",
	                        "d6c98e8f0ac6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-051000 -n mount-start-2-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-051000 -n mount-start-2-051000: exit status 6 (245.769094ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0729 03:59:17.151395    7021 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-2-051000" does not appear in /Users/jenkins/minikube-integration/19337-1372/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-051000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountSecond (885.30s)

TestMultiNode/serial/FreshStart2Nodes (749.86s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-975000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0729 04:01:56.698180    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 04:03:53.641001    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 04:04:14.190293    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 04:07:17.301801    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 04:08:53.700115    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 04:09:14.248534    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-975000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m29.746731473s)

-- stdout --
	* [multinode-975000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "multinode-975000" primary control-plane node in "multinode-975000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-975000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0729 04:00:26.336237    7357 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:00:26.336503    7357 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:00:26.336508    7357 out.go:304] Setting ErrFile to fd 2...
	I0729 04:00:26.336512    7357 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:00:26.336674    7357 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 04:00:26.338143    7357 out.go:298] Setting JSON to false
	I0729 04:00:26.360564    7357 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":3596,"bootTime":1722247230,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 04:00:26.360658    7357 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:00:26.382515    7357 out.go:177] * [multinode-975000] minikube v1.33.1 on Darwin 14.5
	I0729 04:00:26.424239    7357 notify.go:220] Checking for updates...
	I0729 04:00:26.444970    7357 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 04:00:26.503276    7357 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	I0729 04:00:26.545092    7357 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 04:00:26.566392    7357 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:00:26.587251    7357 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	I0729 04:00:26.608152    7357 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:00:26.629730    7357 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:00:26.654379    7357 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 04:00:26.654541    7357 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 04:00:26.733954    7357 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:74 SystemTime:2024-07-29 11:00:26.724822388 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 04:00:26.755929    7357 out.go:177] * Using the docker driver based on user configuration
	I0729 04:00:26.777711    7357 start.go:297] selected driver: docker
	I0729 04:00:26.777736    7357 start.go:901] validating driver "docker" against <nil>
	I0729 04:00:26.777751    7357 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:00:26.782203    7357 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 04:00:26.862267    7357 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:74 SystemTime:2024-07-29 11:00:26.853863601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 04:00:26.862462    7357 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 04:00:26.862647    7357 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:00:26.883747    7357 out.go:177] * Using Docker Desktop driver with root privileges
	I0729 04:00:26.904560    7357 cni.go:84] Creating CNI manager for ""
	I0729 04:00:26.904588    7357 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 04:00:26.904600    7357 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 04:00:26.904709    7357 start.go:340] cluster config:
	{Name:multinode-975000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-975000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:00:26.926782    7357 out.go:177] * Starting "multinode-975000" primary control-plane node in "multinode-975000" cluster
	I0729 04:00:26.968501    7357 cache.go:121] Beginning downloading kic base image for docker with docker
	I0729 04:00:26.989686    7357 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0729 04:00:27.031584    7357 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:00:27.031638    7357 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 04:00:27.031659    7357 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0729 04:00:27.031694    7357 cache.go:56] Caching tarball of preloaded images
	I0729 04:00:27.031940    7357 preload.go:172] Found /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0729 04:00:27.031959    7357 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:00:27.033475    7357 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/multinode-975000/config.json ...
	I0729 04:00:27.033598    7357 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/multinode-975000/config.json: {Name:mk5235fb7c4721ed483015b30afc7d8e48e03d71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0729 04:00:27.057249    7357 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0729 04:00:27.057259    7357 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 04:00:27.057383    7357 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 04:00:27.057400    7357 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 04:00:27.057406    7357 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 04:00:27.057414    7357 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 04:00:27.057419    7357 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0729 04:00:27.060262    7357 image.go:273] response: 
	I0729 04:00:27.188602    7357 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0729 04:00:27.188649    7357 cache.go:194] Successfully downloaded all kic artifacts
	I0729 04:00:27.188697    7357 start.go:360] acquireMachinesLock for multinode-975000: {Name:mk56d69be69adad7dd08096217f5da7f0ad36bac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:00:27.188875    7357 start.go:364] duration metric: took 165.135µs to acquireMachinesLock for "multinode-975000"
	I0729 04:00:27.188903    7357 start.go:93] Provisioning new machine with config: &{Name:multinode-975000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-975000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0729 04:00:27.188957    7357 start.go:125] createHost starting for "" (driver="docker")
	I0729 04:00:27.231949    7357 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0729 04:00:27.232142    7357 start.go:159] libmachine.API.Create for "multinode-975000" (driver="docker")
	I0729 04:00:27.232171    7357 client.go:168] LocalClient.Create starting
	I0729 04:00:27.232289    7357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/ca.pem
	I0729 04:00:27.232339    7357 main.go:141] libmachine: Decoding PEM data...
	I0729 04:00:27.232356    7357 main.go:141] libmachine: Parsing certificate...
	I0729 04:00:27.232411    7357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/cert.pem
	I0729 04:00:27.232449    7357 main.go:141] libmachine: Decoding PEM data...
	I0729 04:00:27.232457    7357 main.go:141] libmachine: Parsing certificate...
	I0729 04:00:27.232979    7357 cli_runner.go:164] Run: docker network inspect multinode-975000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 04:00:27.250288    7357 cli_runner.go:211] docker network inspect multinode-975000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 04:00:27.250393    7357 network_create.go:284] running [docker network inspect multinode-975000] to gather additional debugging logs...
	I0729 04:00:27.250410    7357 cli_runner.go:164] Run: docker network inspect multinode-975000
	W0729 04:00:27.267442    7357 cli_runner.go:211] docker network inspect multinode-975000 returned with exit code 1
	I0729 04:00:27.267476    7357 network_create.go:287] error running [docker network inspect multinode-975000]: docker network inspect multinode-975000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-975000 not found
	I0729 04:00:27.267487    7357 network_create.go:289] output of [docker network inspect multinode-975000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-975000 not found
	
	** /stderr **
	I0729 04:00:27.267614    7357 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 04:00:27.286314    7357 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:00:27.287729    7357 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:00:27.288085    7357 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00151f000}
	I0729 04:00:27.288126    7357 network_create.go:124] attempt to create docker network multinode-975000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0729 04:00:27.288202    7357 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-975000 multinode-975000
	I0729 04:00:27.351520    7357 network_create.go:108] docker network multinode-975000 192.168.67.0/24 created
	I0729 04:00:27.351555    7357 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-975000" container
	I0729 04:00:27.351673    7357 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 04:00:27.369321    7357 cli_runner.go:164] Run: docker volume create multinode-975000 --label name.minikube.sigs.k8s.io=multinode-975000 --label created_by.minikube.sigs.k8s.io=true
	I0729 04:00:27.387422    7357 oci.go:103] Successfully created a docker volume multinode-975000
	I0729 04:00:27.387548    7357 cli_runner.go:164] Run: docker run --rm --name multinode-975000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-975000 --entrypoint /usr/bin/test -v multinode-975000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 04:00:27.772466    7357 oci.go:107] Successfully prepared a docker volume multinode-975000
	I0729 04:00:27.772532    7357 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:00:27.772554    7357 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 04:00:27.772736    7357 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-975000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0729 04:06:27.295367    7357 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 04:06:27.298179    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:06:27.315784    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:06:27.315898    7357 retry.go:31] will retry after 211.959996ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:27.530260    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:06:27.549431    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:06:27.549548    7357 retry.go:31] will retry after 301.857336ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:27.851700    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:06:27.869921    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:06:27.870034    7357 retry.go:31] will retry after 627.124369ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:28.499563    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:06:28.518500    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	W0729 04:06:28.518603    7357 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	
	W0729 04:06:28.518628    7357 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:28.518689    7357 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 04:06:28.518746    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:06:28.536585    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:06:28.536678    7357 retry.go:31] will retry after 164.053711ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:28.703137    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:06:28.722174    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:06:28.722271    7357 retry.go:31] will retry after 560.267331ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:29.283640    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:06:29.303895    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:06:29.303990    7357 retry.go:31] will retry after 643.166211ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:29.949542    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:06:29.969847    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	W0729 04:06:29.969956    7357 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	
	W0729 04:06:29.969989    7357 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:29.970006    7357 start.go:128] duration metric: took 6m2.723545448s to createHost
	I0729 04:06:29.970012    7357 start.go:83] releasing machines lock for "multinode-975000", held for 6m2.723637373s
	W0729 04:06:29.970028    7357 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I0729 04:06:29.970473    7357 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:06:29.987555    7357 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:06:29.987615    7357 delete.go:82] Unable to get host status for multinode-975000, assuming it has already been deleted: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	W0729 04:06:29.987693    7357 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0729 04:06:29.987701    7357 start.go:729] Will try again in 5 seconds ...
	I0729 04:06:34.989694    7357 start.go:360] acquireMachinesLock for multinode-975000: {Name:mk56d69be69adad7dd08096217f5da7f0ad36bac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:06:34.989978    7357 start.go:364] duration metric: took 238.389µs to acquireMachinesLock for "multinode-975000"
	I0729 04:06:34.990034    7357 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:06:34.990068    7357 fix.go:54] fixHost starting: 
	I0729 04:06:34.990516    7357 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:06:35.009251    7357 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:06:35.009299    7357 fix.go:112] recreateIfNeeded on multinode-975000: state= err=unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:35.009322    7357 fix.go:117] machineExists: false. err=machine does not exist
	I0729 04:06:35.031305    7357 out.go:177] * docker "multinode-975000" container is missing, will recreate.
	I0729 04:06:35.053005    7357 delete.go:124] DEMOLISHING multinode-975000 ...
	I0729 04:06:35.053201    7357 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:06:35.071460    7357 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	W0729 04:06:35.071508    7357 stop.go:83] unable to get state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:35.071526    7357 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:35.071899    7357 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:06:35.088914    7357 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:06:35.088966    7357 delete.go:82] Unable to get host status for multinode-975000, assuming it has already been deleted: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:35.089066    7357 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-975000
	W0729 04:06:35.105832    7357 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-975000 returned with exit code 1
	I0729 04:06:35.105875    7357 kic.go:371] could not find the container multinode-975000 to remove it. will try anyways
	I0729 04:06:35.105956    7357 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:06:35.122658    7357 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	W0729 04:06:35.122719    7357 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:35.122809    7357 cli_runner.go:164] Run: docker exec --privileged -t multinode-975000 /bin/bash -c "sudo init 0"
	W0729 04:06:35.139870    7357 cli_runner.go:211] docker exec --privileged -t multinode-975000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0729 04:06:35.139902    7357 oci.go:650] error shutdown multinode-975000: docker exec --privileged -t multinode-975000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:36.142274    7357 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:06:36.162890    7357 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:06:36.162935    7357 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:36.162945    7357 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:06:36.162969    7357 retry.go:31] will retry after 398.897192ms: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:36.562432    7357 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:06:36.581982    7357 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:06:36.582027    7357 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:36.582040    7357 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:06:36.582063    7357 retry.go:31] will retry after 534.775426ms: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:37.118086    7357 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:06:37.138134    7357 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:06:37.138182    7357 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:37.138195    7357 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:06:37.138229    7357 retry.go:31] will retry after 1.320789776s: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:38.461318    7357 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:06:38.480536    7357 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:06:38.480583    7357 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:38.480594    7357 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:06:38.480617    7357 retry.go:31] will retry after 1.770558793s: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:40.251920    7357 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:06:40.272442    7357 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:06:40.272488    7357 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:40.272497    7357 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:06:40.272523    7357 retry.go:31] will retry after 1.383812369s: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:41.657847    7357 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:06:41.677656    7357 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:06:41.677704    7357 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:41.677715    7357 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:06:41.677736    7357 retry.go:31] will retry after 3.542492182s: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:45.221112    7357 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:06:45.240962    7357 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:06:45.241009    7357 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:45.241024    7357 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:06:45.241051    7357 retry.go:31] will retry after 3.253086092s: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:48.496566    7357 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:06:48.515597    7357 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:06:48.515640    7357 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:06:48.515650    7357 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:06:48.515684    7357 oci.go:88] couldn't shut down multinode-975000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	 
	I0729 04:06:48.515770    7357 cli_runner.go:164] Run: docker rm -f -v multinode-975000
	I0729 04:06:48.533830    7357 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-975000
	W0729 04:06:48.551819    7357 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-975000 returned with exit code 1
	I0729 04:06:48.551935    7357 cli_runner.go:164] Run: docker network inspect multinode-975000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 04:06:48.570360    7357 cli_runner.go:164] Run: docker network rm multinode-975000
	I0729 04:06:48.657019    7357 fix.go:124] Sleeping 1 second for extra luck!
	I0729 04:06:49.659294    7357 start.go:125] createHost starting for "" (driver="docker")
	I0729 04:06:49.684225    7357 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0729 04:06:49.684395    7357 start.go:159] libmachine.API.Create for "multinode-975000" (driver="docker")
	I0729 04:06:49.684434    7357 client.go:168] LocalClient.Create starting
	I0729 04:06:49.684663    7357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/ca.pem
	I0729 04:06:49.684756    7357 main.go:141] libmachine: Decoding PEM data...
	I0729 04:06:49.684783    7357 main.go:141] libmachine: Parsing certificate...
	I0729 04:06:49.684868    7357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/cert.pem
	I0729 04:06:49.684961    7357 main.go:141] libmachine: Decoding PEM data...
	I0729 04:06:49.684977    7357 main.go:141] libmachine: Parsing certificate...
	I0729 04:06:49.685993    7357 cli_runner.go:164] Run: docker network inspect multinode-975000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 04:06:49.704966    7357 cli_runner.go:211] docker network inspect multinode-975000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 04:06:49.705060    7357 network_create.go:284] running [docker network inspect multinode-975000] to gather additional debugging logs...
	I0729 04:06:49.705073    7357 cli_runner.go:164] Run: docker network inspect multinode-975000
	W0729 04:06:49.722021    7357 cli_runner.go:211] docker network inspect multinode-975000 returned with exit code 1
	I0729 04:06:49.722048    7357 network_create.go:287] error running [docker network inspect multinode-975000]: docker network inspect multinode-975000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-975000 not found
	I0729 04:06:49.722062    7357 network_create.go:289] output of [docker network inspect multinode-975000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-975000 not found
	
	** /stderr **
	I0729 04:06:49.722216    7357 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 04:06:49.740914    7357 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:06:49.742504    7357 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:06:49.744047    7357 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:06:49.744378    7357 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000659b90}
	I0729 04:06:49.744391    7357 network_create.go:124] attempt to create docker network multinode-975000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0729 04:06:49.744461    7357 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-975000 multinode-975000
	W0729 04:06:49.761641    7357 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-975000 multinode-975000 returned with exit code 1
	W0729 04:06:49.761683    7357 network_create.go:149] failed to create docker network multinode-975000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-975000 multinode-975000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0729 04:06:49.761702    7357 network_create.go:116] failed to create docker network multinode-975000 192.168.76.0/24, will retry: subnet is taken
	I0729 04:06:49.763286    7357 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:06:49.764678    7357 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:06:49.765428    7357 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000882f70}
	I0729 04:06:49.765443    7357 network_create.go:124] attempt to create docker network multinode-975000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I0729 04:06:49.765517    7357 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-975000 multinode-975000
	I0729 04:06:49.827978    7357 network_create.go:108] docker network multinode-975000 192.168.94.0/24 created
	I0729 04:06:49.828010    7357 kic.go:121] calculated static IP "192.168.94.2" for the "multinode-975000" container
	I0729 04:06:49.828132    7357 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 04:06:49.845941    7357 cli_runner.go:164] Run: docker volume create multinode-975000 --label name.minikube.sigs.k8s.io=multinode-975000 --label created_by.minikube.sigs.k8s.io=true
	I0729 04:06:49.862818    7357 oci.go:103] Successfully created a docker volume multinode-975000
	I0729 04:06:49.862952    7357 cli_runner.go:164] Run: docker run --rm --name multinode-975000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-975000 --entrypoint /usr/bin/test -v multinode-975000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 04:06:50.145335    7357 oci.go:107] Successfully prepared a docker volume multinode-975000
	I0729 04:06:50.145376    7357 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:06:50.145393    7357 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 04:06:50.145505    7357 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-975000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0729 04:12:49.679007    7357 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 04:12:49.679220    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:12:49.700489    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:12:49.700600    7357 retry.go:31] will retry after 356.54384ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:12:50.058672    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:12:50.077546    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:12:50.077661    7357 retry.go:31] will retry after 358.616549ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:12:50.438521    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:12:50.458292    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:12:50.458394    7357 retry.go:31] will retry after 617.00582ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:12:51.077840    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:12:51.097481    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	W0729 04:12:51.097593    7357 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	
	W0729 04:12:51.097610    7357 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:12:51.097670    7357 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 04:12:51.097735    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:12:51.114826    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:12:51.114921    7357 retry.go:31] will retry after 164.534865ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:12:51.281098    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:12:51.299552    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:12:51.299650    7357 retry.go:31] will retry after 410.612685ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:12:51.710890    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:12:51.729342    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:12:51.729437    7357 retry.go:31] will retry after 596.027095ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:12:52.327889    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:12:52.347109    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	W0729 04:12:52.347212    7357 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	
	W0729 04:12:52.347227    7357 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:12:52.347241    7357 start.go:128] duration metric: took 6m2.69395993s to createHost
	I0729 04:12:52.347312    7357 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 04:12:52.347374    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:12:52.365069    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:12:52.365169    7357 retry.go:31] will retry after 205.204289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:12:52.570970    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:12:52.590228    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:12:52.590326    7357 retry.go:31] will retry after 517.905987ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:12:53.110645    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:12:53.129857    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:12:53.129952    7357 retry.go:31] will retry after 826.103775ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:12:53.958283    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:12:53.978148    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	W0729 04:12:53.978248    7357 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	
	W0729 04:12:53.978266    7357 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:12:53.978325    7357 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 04:12:53.978378    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:12:53.995933    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:12:53.996024    7357 retry.go:31] will retry after 351.922081ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:12:54.350333    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:12:54.370348    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:12:54.370443    7357 retry.go:31] will retry after 439.183519ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:12:54.812040    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:12:54.831030    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:12:54.831153    7357 retry.go:31] will retry after 323.208033ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:12:55.154771    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:12:55.174411    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:12:55.174503    7357 retry.go:31] will retry after 731.38037ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:12:55.908222    7357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:12:55.928022    7357 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	W0729 04:12:55.928151    7357 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	
	W0729 04:12:55.928173    7357 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:12:55.928184    7357 fix.go:56] duration metric: took 6m20.94446315s for fixHost
	I0729 04:12:55.928191    7357 start.go:83] releasing machines lock for "multinode-975000", held for 6m20.944530232s
	W0729 04:12:55.928289    7357 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-975000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-975000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0729 04:12:55.971821    7357 out.go:177] 
	W0729 04:12:55.993009    7357 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0729 04:12:55.993092    7357 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0729 04:12:55.993186    7357 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0729 04:12:56.036705    7357 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-975000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-975000
helpers_test.go:235: (dbg) docker inspect multinode-975000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-975000",
	        "Id": "dc3a54bb56fab08b69ca94a832d6b4db3cef6d149bddefeb7c930f3b79f7c965",
	        "Created": "2024-07-29T11:06:49.780680208Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-975000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
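Note what `docker inspect multinode-975000` actually matched here: the object has `Scope`, `Driver: bridge`, and `IPAM` fields, so it is the minikube *network*, not the container — `docker inspect` without `--type` resolves a name against any object kind, and the container itself is gone. A small sketch of pulling the IPAM subnet out of that output (hypothetical `subnetOf` helper; `inspectJSON` is an excerpt of the log above):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// inspectJSON is a trimmed excerpt of the `docker inspect` output above;
// with the container deleted, the name matches the network object instead.
const inspectJSON = `[{"Name":"multinode-975000","Driver":"bridge",` +
	`"IPAM":{"Driver":"default","Config":[{"Subnet":"192.168.94.0/24","Gateway":"192.168.94.1"}]}}]`

type network struct {
	Name string
	IPAM struct {
		Config []struct {
			Subnet  string
			Gateway string
		}
	}
}

// subnetOf returns the first IPAM subnet recorded for the inspected object.
func subnetOf(raw string) (string, error) {
	var nets []network
	if err := json.Unmarshal([]byte(raw), &nets); err != nil {
		return "", err
	}
	if len(nets) == 0 || len(nets[0].IPAM.Config) == 0 {
		return "", fmt.Errorf("no IPAM config found")
	}
	return nets[0].IPAM.Config[0].Subnet, nil
}

func main() {
	subnet, err := subnetOf(inspectJSON)
	fmt.Println(subnet, err)
}
```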
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000: exit status 7 (76.864982ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 04:12:56.190092    7602 status.go:249] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-975000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (749.86s)

TestMultiNode/serial/DeployApp2Nodes (85.15s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-975000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-975000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (103.160842ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-975000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-975000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-975000 -- rollout status deployment/busybox: exit status 1 (98.398611ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-975000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.874637ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-975000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.514438ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-975000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.116837ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-975000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.273696ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-975000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.869174ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-975000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.950917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-975000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.429944ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-975000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.663761ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-975000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.025052ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-975000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0729 04:13:53.694961    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 04:14:14.243603    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.300842ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-975000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (99.274432ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-975000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-975000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-975000 -- exec  -- nslookup kubernetes.io: exit status 1 (100.467746ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-975000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-975000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-975000 -- exec  -- nslookup kubernetes.default: exit status 1 (100.124499ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-975000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-975000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-975000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (99.582817ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-975000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-975000
helpers_test.go:235: (dbg) docker inspect multinode-975000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-975000",
	        "Id": "dc3a54bb56fab08b69ca94a832d6b4db3cef6d149bddefeb7c930f3b79f7c965",
	        "Created": "2024-07-29T11:06:49.780680208Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-975000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000: exit status 7 (73.625615ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 04:14:21.339716    7665 status.go:249] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-975000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (85.15s)

TestMultiNode/serial/PingHostFrom2Pods (0.19s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-975000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (99.27457ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-975000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-975000
helpers_test.go:235: (dbg) docker inspect multinode-975000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-975000",
	        "Id": "dc3a54bb56fab08b69ca94a832d6b4db3cef6d149bddefeb7c930f3b79f7c965",
	        "Created": "2024-07-29T11:06:49.780680208Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-975000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000: exit status 7 (73.527915ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 04:14:21.533814    7672 status.go:249] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-975000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.19s)

TestMultiNode/serial/AddNode (0.26s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-975000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-975000 -v 3 --alsologtostderr: exit status 80 (160.848021ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:14:21.588539    7675 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:14:21.589507    7675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:21.589514    7675 out.go:304] Setting ErrFile to fd 2...
	I0729 04:14:21.589518    7675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:21.589697    7675 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 04:14:21.590028    7675 mustload.go:65] Loading cluster: multinode-975000
	I0729 04:14:21.590300    7675 config.go:182] Loaded profile config "multinode-975000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:14:21.590686    7675 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:14:21.607561    7675 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:14:21.629301    7675 out.go:177] 
	W0729 04:14:21.650252    7675 out.go:239] X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-975000 host status: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-975000 host status: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	
	I0729 04:14:21.671963    7675 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-975000 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-975000
helpers_test.go:235: (dbg) docker inspect multinode-975000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-975000",
	        "Id": "dc3a54bb56fab08b69ca94a832d6b4db3cef6d149bddefeb7c930f3b79f7c965",
	        "Created": "2024-07-29T11:06:49.780680208Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-975000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000: exit status 7 (73.081299ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 04:14:21.789614    7679 status.go:249] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-975000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.26s)

TestMultiNode/serial/MultiNodeLabels (0.13s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-975000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-975000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (36.809955ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-975000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-975000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-975000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
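The "unexpected end of JSON input" on the last line is exactly what Go's `encoding/json` reports when asked to unmarshal empty input: with the kubectl context missing, the `get nodes` command wrote nothing to stdout, so the test's decoder saw an empty string. A minimal reproduction (hypothetical `decodeLabels` helper, not the test's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// decodeLabels mimics the test's attempt to decode the jsonpath output of
// `kubectl get nodes` as JSON. When kubectl fails with a missing context,
// its stdout is empty and the decoder sees zero bytes of input.
func decodeLabels(out string) ([]map[string]string, error) {
	var labels []map[string]string
	err := json.Unmarshal([]byte(out), &labels)
	return labels, err
}

func main() {
	_, err := decodeLabels("") // empty stdout, as in the failed run
	fmt.Println(err)           // "unexpected end of JSON input"
}
```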
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-975000
helpers_test.go:235: (dbg) docker inspect multinode-975000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-975000",
	        "Id": "dc3a54bb56fab08b69ca94a832d6b4db3cef6d149bddefeb7c930f3b79f7c965",
	        "Created": "2024-07-29T11:06:49.780680208Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-975000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000: exit status 7 (72.713964ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 04:14:21.920436    7684 status.go:249] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-975000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.13s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:166: expected profile "multinode-975000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-2-051000\",\"Status\":\"\",\"Config\":null,\"Active\":false,\"ActiveKubeContext\":false}],\"valid\":[{\"Name\":\"multinode-975000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-975000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":
false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-975000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"
KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"A
utoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-975000
helpers_test.go:235: (dbg) docker inspect multinode-975000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-975000",
	        "Id": "dc3a54bb56fab08b69ca94a832d6b4db3cef6d149bddefeb7c930f3b79f7c965",
	        "Created": "2024-07-29T11:06:49.780680208Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-975000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000: exit status 7 (73.671744ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 04:14:22.130824    7692 status.go:249] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-975000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (0.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-975000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-975000 status --output json --alsologtostderr: exit status 7 (72.634383ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-975000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:14:22.184778    7695 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:14:22.185051    7695 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:22.185056    7695 out.go:304] Setting ErrFile to fd 2...
	I0729 04:14:22.185060    7695 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:22.185245    7695 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 04:14:22.185431    7695 out.go:298] Setting JSON to true
	I0729 04:14:22.185453    7695 mustload.go:65] Loading cluster: multinode-975000
	I0729 04:14:22.185490    7695 notify.go:220] Checking for updates...
	I0729 04:14:22.185714    7695 config.go:182] Loaded profile config "multinode-975000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:14:22.185728    7695 status.go:255] checking status of multinode-975000 ...
	I0729 04:14:22.186134    7695 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:14:22.203546    7695 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:14:22.203610    7695 status.go:330] multinode-975000 host status = "" (err=state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	)
	I0729 04:14:22.203631    7695 status.go:257] multinode-975000 status: &{Name:multinode-975000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 04:14:22.203663    7695 status.go:260] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	E0729 04:14:22.203671    7695 status.go:263] The "multinode-975000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-975000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-975000
helpers_test.go:235: (dbg) docker inspect multinode-975000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-975000",
	        "Id": "dc3a54bb56fab08b69ca94a832d6b4db3cef6d149bddefeb7c930f3b79f7c965",
	        "Created": "2024-07-29T11:06:49.780680208Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-975000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000: exit status 7 (72.434938ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 04:14:22.297097    7699 status.go:249] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-975000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.17s)

                                                
                                    
TestMultiNode/serial/StopNode (0.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-975000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-975000 node stop m03: exit status 85 (146.781227ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-975000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-975000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-975000 status: exit status 7 (73.244254ms)

                                                
                                                
-- stdout --
	multinode-975000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 04:14:22.517859    7704 status.go:260] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	E0729 04:14:22.517871    7704 status.go:263] The "multinode-975000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-975000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-975000 status --alsologtostderr: exit status 7 (73.905806ms)

                                                
                                                
-- stdout --
	multinode-975000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:14:22.572918    7707 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:14:22.573194    7707 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:22.573200    7707 out.go:304] Setting ErrFile to fd 2...
	I0729 04:14:22.573204    7707 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:22.573387    7707 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 04:14:22.573561    7707 out.go:298] Setting JSON to false
	I0729 04:14:22.573583    7707 mustload.go:65] Loading cluster: multinode-975000
	I0729 04:14:22.573619    7707 notify.go:220] Checking for updates...
	I0729 04:14:22.573869    7707 config.go:182] Loaded profile config "multinode-975000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:14:22.573882    7707 status.go:255] checking status of multinode-975000 ...
	I0729 04:14:22.574269    7707 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:14:22.591748    7707 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:14:22.591815    7707 status.go:330] multinode-975000 host status = "" (err=state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	)
	I0729 04:14:22.591834    7707 status.go:257] multinode-975000 status: &{Name:multinode-975000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 04:14:22.591858    7707 status.go:260] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	E0729 04:14:22.591865    7707 status.go:263] The "multinode-975000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-975000 status --alsologtostderr": multinode-975000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:271: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-975000 status --alsologtostderr": multinode-975000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:275: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-975000 status --alsologtostderr": multinode-975000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-975000
helpers_test.go:235: (dbg) docker inspect multinode-975000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-975000",
	        "Id": "dc3a54bb56fab08b69ca94a832d6b4db3cef6d149bddefeb7c930f3b79f7c965",
	        "Created": "2024-07-29T11:06:49.780680208Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-975000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000: exit status 7 (73.968763ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 04:14:22.686591    7711 status.go:249] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-975000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.39s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (56.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-975000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-975000 node start m03 -v=7 --alsologtostderr: exit status 85 (146.599286ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:14:22.741446    7714 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:14:22.741817    7714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:22.741824    7714 out.go:304] Setting ErrFile to fd 2...
	I0729 04:14:22.741828    7714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:22.742006    7714 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 04:14:22.742330    7714 mustload.go:65] Loading cluster: multinode-975000
	I0729 04:14:22.742592    7714 config.go:182] Loaded profile config "multinode-975000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:14:22.764891    7714 out.go:177] 
	W0729 04:14:22.786622    7714 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0729 04:14:22.786649    7714 out.go:239] * 
	* 
	W0729 04:14:22.790818    7714 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 04:14:22.811522    7714 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0729 04:14:22.741446    7714 out.go:291] Setting OutFile to fd 1 ...
I0729 04:14:22.741817    7714 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:14:22.741824    7714 out.go:304] Setting ErrFile to fd 2...
I0729 04:14:22.741828    7714 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 04:14:22.742006    7714 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
I0729 04:14:22.742330    7714 mustload.go:65] Loading cluster: multinode-975000
I0729 04:14:22.742592    7714 config.go:182] Loaded profile config "multinode-975000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 04:14:22.764891    7714 out.go:177] 
W0729 04:14:22.786622    7714 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0729 04:14:22.786649    7714 out.go:239] * 
* 
W0729 04:14:22.790818    7714 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0729 04:14:22.811522    7714 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-975000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-975000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-975000 status -v=7 --alsologtostderr: exit status 7 (73.420663ms)

                                                
                                                
-- stdout --
	multinode-975000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:14:22.888328    7716 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:14:22.888511    7716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:22.888516    7716 out.go:304] Setting ErrFile to fd 2...
	I0729 04:14:22.888520    7716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:22.888689    7716 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 04:14:22.888868    7716 out.go:298] Setting JSON to false
	I0729 04:14:22.888898    7716 mustload.go:65] Loading cluster: multinode-975000
	I0729 04:14:22.888936    7716 notify.go:220] Checking for updates...
	I0729 04:14:22.889191    7716 config.go:182] Loaded profile config "multinode-975000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:14:22.889204    7716 status.go:255] checking status of multinode-975000 ...
	I0729 04:14:22.889580    7716 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:14:22.906988    7716 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:14:22.907047    7716 status.go:330] multinode-975000 host status = "" (err=state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	)
	I0729 04:14:22.907072    7716 status.go:257] multinode-975000 status: &{Name:multinode-975000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 04:14:22.907094    7716 status.go:260] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	E0729 04:14:22.907102    7716 status.go:263] The "multinode-975000" host does not exist!

                                                
                                                
** /stderr **
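The repeated `status` attempts above (04:14:22, 04:14:23, 04:14:25, 04:14:28, ...) show the test harness re-probing with growing delays until it gives up. A minimal sketch of that retry-with-backoff pattern in POSIX shell (function name and delay schedule are illustrative, not the harness's actual code):

```shell
#!/bin/sh
# Re-run a status probe, sleeping between attempts, until it succeeds
# or the delay schedule is exhausted -- the shape of the repeated
# `minikube status` calls in the log above.
probe_with_backoff() {
    cmd="$1"; shift          # command to probe; remaining args are delays
    for delay in "$@"; do
        if $cmd; then
            echo "probe succeeded"
            return 0
        fi
        sleep "$delay"
    done
    echo "probe failed after retries"
    return 1
}

probe_with_backoff true 0
```

In the log the probe never succeeds because the `multinode-975000` container no longer exists, so every attempt ends with exit status 7.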
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-975000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-975000 status -v=7 --alsologtostderr: exit status 7 (78.566393ms)

                                                
                                                
-- stdout --
	multinode-975000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:14:23.847471    7719 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:14:23.847687    7719 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:23.847693    7719 out.go:304] Setting ErrFile to fd 2...
	I0729 04:14:23.847696    7719 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:23.847895    7719 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 04:14:23.848066    7719 out.go:298] Setting JSON to false
	I0729 04:14:23.848106    7719 mustload.go:65] Loading cluster: multinode-975000
	I0729 04:14:23.848149    7719 notify.go:220] Checking for updates...
	I0729 04:14:23.848410    7719 config.go:182] Loaded profile config "multinode-975000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:14:23.848427    7719 status.go:255] checking status of multinode-975000 ...
	I0729 04:14:23.848886    7719 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:14:23.867546    7719 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:14:23.867609    7719 status.go:330] multinode-975000 host status = "" (err=state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	)
	I0729 04:14:23.867632    7719 status.go:257] multinode-975000 status: &{Name:multinode-975000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 04:14:23.867657    7719 status.go:260] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	E0729 04:14:23.867671    7719 status.go:263] The "multinode-975000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-975000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-975000 status -v=7 --alsologtostderr: exit status 7 (79.251569ms)

                                                
                                                
-- stdout --
	multinode-975000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:14:25.483748    7724 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:14:25.483938    7724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:25.483943    7724 out.go:304] Setting ErrFile to fd 2...
	I0729 04:14:25.483947    7724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:25.484119    7724 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 04:14:25.484299    7724 out.go:298] Setting JSON to false
	I0729 04:14:25.484323    7724 mustload.go:65] Loading cluster: multinode-975000
	I0729 04:14:25.484822    7724 notify.go:220] Checking for updates...
	I0729 04:14:25.485617    7724 config.go:182] Loaded profile config "multinode-975000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:14:25.485634    7724 status.go:255] checking status of multinode-975000 ...
	I0729 04:14:25.486021    7724 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:14:25.503533    7724 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:14:25.503603    7724 status.go:330] multinode-975000 host status = "" (err=state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	)
	I0729 04:14:25.503623    7724 status.go:257] multinode-975000 status: &{Name:multinode-975000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 04:14:25.503646    7724 status.go:260] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	E0729 04:14:25.503654    7724 status.go:263] The "multinode-975000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-975000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-975000 status -v=7 --alsologtostderr: exit status 7 (79.401448ms)

                                                
                                                
-- stdout --
	multinode-975000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:14:28.235051    7727 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:14:28.235276    7727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:28.235281    7727 out.go:304] Setting ErrFile to fd 2...
	I0729 04:14:28.235285    7727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:28.235465    7727 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 04:14:28.235671    7727 out.go:298] Setting JSON to false
	I0729 04:14:28.235731    7727 mustload.go:65] Loading cluster: multinode-975000
	I0729 04:14:28.235783    7727 notify.go:220] Checking for updates...
	I0729 04:14:28.235993    7727 config.go:182] Loaded profile config "multinode-975000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:14:28.236007    7727 status.go:255] checking status of multinode-975000 ...
	I0729 04:14:28.236443    7727 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:14:28.254961    7727 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:14:28.255028    7727 status.go:330] multinode-975000 host status = "" (err=state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	)
	I0729 04:14:28.255047    7727 status.go:257] multinode-975000 status: &{Name:multinode-975000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 04:14:28.255071    7727 status.go:260] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	E0729 04:14:28.255078    7727 status.go:263] The "multinode-975000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-975000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-975000 status -v=7 --alsologtostderr: exit status 7 (78.196166ms)

                                                
                                                
-- stdout --
	multinode-975000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:14:30.264078    7730 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:14:30.264265    7730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:30.264271    7730 out.go:304] Setting ErrFile to fd 2...
	I0729 04:14:30.264274    7730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:30.264446    7730 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 04:14:30.264628    7730 out.go:298] Setting JSON to false
	I0729 04:14:30.264649    7730 mustload.go:65] Loading cluster: multinode-975000
	I0729 04:14:30.264689    7730 notify.go:220] Checking for updates...
	I0729 04:14:30.264906    7730 config.go:182] Loaded profile config "multinode-975000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:14:30.264920    7730 status.go:255] checking status of multinode-975000 ...
	I0729 04:14:30.265306    7730 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:14:30.283326    7730 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:14:30.283387    7730 status.go:330] multinode-975000 host status = "" (err=state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	)
	I0729 04:14:30.283413    7730 status.go:257] multinode-975000 status: &{Name:multinode-975000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 04:14:30.283439    7730 status.go:260] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	E0729 04:14:30.283445    7730 status.go:263] The "multinode-975000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-975000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-975000 status -v=7 --alsologtostderr: exit status 7 (76.770618ms)

                                                
                                                
-- stdout --
	multinode-975000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:14:35.717504    7733 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:14:35.717695    7733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:35.717700    7733 out.go:304] Setting ErrFile to fd 2...
	I0729 04:14:35.717704    7733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:35.718259    7733 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 04:14:35.718664    7733 out.go:298] Setting JSON to false
	I0729 04:14:35.718735    7733 mustload.go:65] Loading cluster: multinode-975000
	I0729 04:14:35.718884    7733 notify.go:220] Checking for updates...
	I0729 04:14:35.719095    7733 config.go:182] Loaded profile config "multinode-975000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:14:35.719108    7733 status.go:255] checking status of multinode-975000 ...
	I0729 04:14:35.719498    7733 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:14:35.737063    7733 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:14:35.737122    7733 status.go:330] multinode-975000 host status = "" (err=state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	)
	I0729 04:14:35.737145    7733 status.go:257] multinode-975000 status: &{Name:multinode-975000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 04:14:35.737179    7733 status.go:260] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	E0729 04:14:35.737186    7733 status.go:263] The "multinode-975000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-975000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-975000 status -v=7 --alsologtostderr: exit status 7 (76.52781ms)

                                                
                                                
-- stdout --
	multinode-975000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:14:42.601306    7736 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:14:42.601511    7736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:42.601517    7736 out.go:304] Setting ErrFile to fd 2...
	I0729 04:14:42.601520    7736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:42.601699    7736 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 04:14:42.601881    7736 out.go:298] Setting JSON to false
	I0729 04:14:42.601903    7736 mustload.go:65] Loading cluster: multinode-975000
	I0729 04:14:42.601939    7736 notify.go:220] Checking for updates...
	I0729 04:14:42.602172    7736 config.go:182] Loaded profile config "multinode-975000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:14:42.602187    7736 status.go:255] checking status of multinode-975000 ...
	I0729 04:14:42.602602    7736 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:14:42.620026    7736 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:14:42.620090    7736 status.go:330] multinode-975000 host status = "" (err=state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	)
	I0729 04:14:42.620112    7736 status.go:257] multinode-975000 status: &{Name:multinode-975000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 04:14:42.620136    7736 status.go:260] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	E0729 04:14:42.620143    7736 status.go:263] The "multinode-975000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-975000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-975000 status -v=7 --alsologtostderr: exit status 7 (79.570245ms)

                                                
                                                
-- stdout --
	multinode-975000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:14:53.188554    7739 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:14:53.188886    7739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:53.188891    7739 out.go:304] Setting ErrFile to fd 2...
	I0729 04:14:53.188895    7739 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:14:53.189132    7739 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 04:14:53.189342    7739 out.go:298] Setting JSON to false
	I0729 04:14:53.189378    7739 mustload.go:65] Loading cluster: multinode-975000
	I0729 04:14:53.189448    7739 notify.go:220] Checking for updates...
	I0729 04:14:53.189701    7739 config.go:182] Loaded profile config "multinode-975000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:14:53.189719    7739 status.go:255] checking status of multinode-975000 ...
	I0729 04:14:53.190098    7739 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:14:53.207187    7739 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:14:53.207251    7739 status.go:330] multinode-975000 host status = "" (err=state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	)
	I0729 04:14:53.207271    7739 status.go:257] multinode-975000 status: &{Name:multinode-975000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 04:14:53.207302    7739 status.go:260] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	E0729 04:14:53.207311    7739 status.go:263] The "multinode-975000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-975000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-975000 status -v=7 --alsologtostderr: exit status 7 (77.368923ms)

                                                
                                                
-- stdout --
	multinode-975000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:15:02.946089    7742 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:15:02.946368    7742 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:15:02.946373    7742 out.go:304] Setting ErrFile to fd 2...
	I0729 04:15:02.946377    7742 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:15:02.946543    7742 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 04:15:02.946720    7742 out.go:298] Setting JSON to false
	I0729 04:15:02.946742    7742 mustload.go:65] Loading cluster: multinode-975000
	I0729 04:15:02.946782    7742 notify.go:220] Checking for updates...
	I0729 04:15:02.947004    7742 config.go:182] Loaded profile config "multinode-975000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:15:02.947018    7742 status.go:255] checking status of multinode-975000 ...
	I0729 04:15:02.947441    7742 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:15:02.965603    7742 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:15:02.965654    7742 status.go:330] multinode-975000 host status = "" (err=state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	)
	I0729 04:15:02.965681    7742 status.go:257] multinode-975000 status: &{Name:multinode-975000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 04:15:02.965708    7742 status.go:260] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	E0729 04:15:02.965714    7742 status.go:263] The "multinode-975000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-975000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-975000 status -v=7 --alsologtostderr: exit status 7 (78.20759ms)

                                                
                                                
-- stdout --
	multinode-975000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:15:19.333820    7750 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:15:19.334001    7750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:15:19.334006    7750 out.go:304] Setting ErrFile to fd 2...
	I0729 04:15:19.334010    7750 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:15:19.334206    7750 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 04:15:19.334380    7750 out.go:298] Setting JSON to false
	I0729 04:15:19.334401    7750 mustload.go:65] Loading cluster: multinode-975000
	I0729 04:15:19.334438    7750 notify.go:220] Checking for updates...
	I0729 04:15:19.334667    7750 config.go:182] Loaded profile config "multinode-975000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:15:19.334680    7750 status.go:255] checking status of multinode-975000 ...
	I0729 04:15:19.335072    7750 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:15:19.353305    7750 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:15:19.353369    7750 status.go:330] multinode-975000 host status = "" (err=state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	)
	I0729 04:15:19.353394    7750 status.go:257] multinode-975000 status: &{Name:multinode-975000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 04:15:19.353417    7750 status.go:260] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	E0729 04:15:19.353429    7750 status.go:263] The "multinode-975000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-975000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-975000
helpers_test.go:235: (dbg) docker inspect multinode-975000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-975000",
	        "Id": "dc3a54bb56fab08b69ca94a832d6b4db3cef6d149bddefeb7c930f3b79f7c965",
	        "Created": "2024-07-29T11:06:49.780680208Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-975000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
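Note that the `docker inspect multinode-975000` output above is a *network* object, not a container: the `IPAM`/`Subnet`/`Scope` fields belong to the leftover bridge network minikube created, while the container of the same name is gone. A quick sketch for telling the two apart from the JSON alone (container inspect JSON carries a `State` field, network inspect JSON carries `Scope`; the fragment below is abridged from the report):

```shell
#!/bin/sh
# Distinguish which kind of Docker object an `inspect` result describes.
json='{"Name":"multinode-975000","Scope":"local","Driver":"bridge"}'
case "$json" in
    *'"Scope"'*)  kind="network" ;;
    *'"State"'*)  kind="container" ;;
    *)            kind="unknown" ;;
esac
echo "matched a $kind object"
```

In practice `docker container inspect NAME` and `docker network inspect NAME` query each type explicitly, which is why the earlier `docker container inspect multinode-975000` calls fail with "No such container" even though this plain `docker inspect` succeeds.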
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000: exit status 7 (72.910658ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 04:15:19.447382    7754 status.go:249] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-975000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (56.76s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (792.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-975000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-975000
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-975000: exit status 82 (12.597012564s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-975000"  ...
	* Stopping node "multinode-975000"  ...
	* Stopping node "multinode-975000"  ...
	* Stopping node "multinode-975000"  ...
	* Stopping node "multinode-975000"  ...
	* Stopping node "multinode-975000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-975000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-975000" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-975000 --wait=true -v=8 --alsologtostderr
E0729 04:18:36.747252    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 04:18:53.688957    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 04:19:14.239181    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 04:23:53.684580    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 04:23:57.285642    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 04:24:14.234689    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-975000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m59.780646956s)

                                                
                                                
-- stdout --
	* [multinode-975000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-975000" primary control-plane node in "multinode-975000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* docker "multinode-975000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-975000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 04:15:32.158762    7769 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:15:32.159109    7769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:15:32.159120    7769 out.go:304] Setting ErrFile to fd 2...
	I0729 04:15:32.159128    7769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:15:32.159471    7769 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 04:15:32.161430    7769 out.go:298] Setting JSON to false
	I0729 04:15:32.185707    7769 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4502,"bootTime":1722247230,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 04:15:32.185799    7769 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:15:32.207832    7769 out.go:177] * [multinode-975000] minikube v1.33.1 on Darwin 14.5
	I0729 04:15:32.249683    7769 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 04:15:32.249748    7769 notify.go:220] Checking for updates...
	I0729 04:15:32.292359    7769 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	I0729 04:15:32.313816    7769 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 04:15:32.334612    7769 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:15:32.355633    7769 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	I0729 04:15:32.376842    7769 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:15:32.398048    7769 config.go:182] Loaded profile config "multinode-975000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:15:32.398177    7769 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:15:32.423371    7769 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 04:15:32.423543    7769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 04:15:32.503338    7769 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:false NGoroutines:98 SystemTime:2024-07-29 11:15:32.494717607 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 04:15:32.545639    7769 out.go:177] * Using the docker driver based on existing profile
	I0729 04:15:32.566715    7769 start.go:297] selected driver: docker
	I0729 04:15:32.566764    7769 start.go:901] validating driver "docker" against &{Name:multinode-975000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-975000 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:15:32.566880    7769 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:15:32.567093    7769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 04:15:32.645656    7769 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:false NGoroutines:98 SystemTime:2024-07-29 11:15:32.636777172 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 04:15:32.648950    7769 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:15:32.648985    7769 cni.go:84] Creating CNI manager for ""
	I0729 04:15:32.648992    7769 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 04:15:32.649052    7769 start.go:340] cluster config:
	{Name:multinode-975000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-975000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:15:32.691798    7769 out.go:177] * Starting "multinode-975000" primary control-plane node in "multinode-975000" cluster
	I0729 04:15:32.713609    7769 cache.go:121] Beginning downloading kic base image for docker with docker
	I0729 04:15:32.734565    7769 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0729 04:15:32.776847    7769 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:15:32.776926    7769 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0729 04:15:32.776947    7769 cache.go:56] Caching tarball of preloaded images
	I0729 04:15:32.776977    7769 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 04:15:32.777184    7769 preload.go:172] Found /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0729 04:15:32.777203    7769 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:15:32.777359    7769 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/multinode-975000/config.json ...
	W0729 04:15:32.802254    7769 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0729 04:15:32.802272    7769 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 04:15:32.802386    7769 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 04:15:32.802404    7769 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 04:15:32.802411    7769 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 04:15:32.802419    7769 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 04:15:32.802423    7769 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0729 04:15:32.805304    7769 image.go:273] response: 
	I0729 04:15:32.949874    7769 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0729 04:15:32.949920    7769 cache.go:194] Successfully downloaded all kic artifacts
	I0729 04:15:32.949964    7769 start.go:360] acquireMachinesLock for multinode-975000: {Name:mk56d69be69adad7dd08096217f5da7f0ad36bac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:15:32.950068    7769 start.go:364] duration metric: took 86.03µs to acquireMachinesLock for "multinode-975000"
	I0729 04:15:32.950093    7769 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:15:32.950102    7769 fix.go:54] fixHost starting: 
	I0729 04:15:32.950355    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:15:32.967485    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:15:32.967563    7769 fix.go:112] recreateIfNeeded on multinode-975000: state= err=unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:15:32.967582    7769 fix.go:117] machineExists: false. err=machine does not exist
	I0729 04:15:32.989402    7769 out.go:177] * docker "multinode-975000" container is missing, will recreate.
	I0729 04:15:33.010225    7769 delete.go:124] DEMOLISHING multinode-975000 ...
	I0729 04:15:33.010323    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:15:33.027378    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	W0729 04:15:33.027422    7769 stop.go:83] unable to get state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:15:33.027441    7769 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:15:33.027800    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:15:33.044601    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:15:33.044650    7769 delete.go:82] Unable to get host status for multinode-975000, assuming it has already been deleted: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:15:33.044741    7769 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-975000
	W0729 04:15:33.061755    7769 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-975000 returned with exit code 1
	I0729 04:15:33.061788    7769 kic.go:371] could not find the container multinode-975000 to remove it. will try anyways
	I0729 04:15:33.061866    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:15:33.079085    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	W0729 04:15:33.079140    7769 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:15:33.079229    7769 cli_runner.go:164] Run: docker exec --privileged -t multinode-975000 /bin/bash -c "sudo init 0"
	W0729 04:15:33.095965    7769 cli_runner.go:211] docker exec --privileged -t multinode-975000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0729 04:15:33.096013    7769 oci.go:650] error shutdown multinode-975000: docker exec --privileged -t multinode-975000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:15:34.096180    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:15:34.113608    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:15:34.113657    7769 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:15:34.113667    7769 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:15:34.113704    7769 retry.go:31] will retry after 407.604713ms: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:15:34.523442    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:15:34.540316    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:15:34.540360    7769 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:15:34.540371    7769 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:15:34.540394    7769 retry.go:31] will retry after 703.826027ms: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:15:35.244578    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:15:35.261950    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:15:35.261992    7769 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:15:35.262001    7769 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:15:35.262027    7769 retry.go:31] will retry after 963.884743ms: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:15:36.227230    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:15:36.244774    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:15:36.244817    7769 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:15:36.244825    7769 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:15:36.244846    7769 retry.go:31] will retry after 1.462299287s: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:15:37.707386    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:15:37.724357    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:15:37.724404    7769 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:15:37.724415    7769 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:15:37.724435    7769 retry.go:31] will retry after 2.293204574s: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:15:40.017870    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:15:40.036242    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:15:40.036285    7769 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:15:40.036293    7769 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:15:40.036318    7769 retry.go:31] will retry after 3.473027086s: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:15:43.511614    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:15:43.531991    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:15:43.532035    7769 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:15:43.532045    7769 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:15:43.532070    7769 retry.go:31] will retry after 6.200927286s: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:15:49.735355    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:15:49.754711    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:15:49.754753    7769 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:15:49.754763    7769 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:15:49.754797    7769 oci.go:88] couldn't shut down multinode-975000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	 
	I0729 04:15:49.754871    7769 cli_runner.go:164] Run: docker rm -f -v multinode-975000
	I0729 04:15:49.773111    7769 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-975000
	W0729 04:15:49.790990    7769 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-975000 returned with exit code 1
	I0729 04:15:49.791102    7769 cli_runner.go:164] Run: docker network inspect multinode-975000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 04:15:49.808332    7769 cli_runner.go:164] Run: docker network rm multinode-975000
	I0729 04:15:49.893631    7769 fix.go:124] Sleeping 1 second for extra luck!
	I0729 04:15:50.895444    7769 start.go:125] createHost starting for "" (driver="docker")
	I0729 04:15:50.917778    7769 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0729 04:15:50.917979    7769 start.go:159] libmachine.API.Create for "multinode-975000" (driver="docker")
	I0729 04:15:50.918020    7769 client.go:168] LocalClient.Create starting
	I0729 04:15:50.918207    7769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/ca.pem
	I0729 04:15:50.918304    7769 main.go:141] libmachine: Decoding PEM data...
	I0729 04:15:50.918337    7769 main.go:141] libmachine: Parsing certificate...
	I0729 04:15:50.918429    7769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/cert.pem
	I0729 04:15:50.918511    7769 main.go:141] libmachine: Decoding PEM data...
	I0729 04:15:50.918536    7769 main.go:141] libmachine: Parsing certificate...
	I0729 04:15:50.919507    7769 cli_runner.go:164] Run: docker network inspect multinode-975000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 04:15:50.937989    7769 cli_runner.go:211] docker network inspect multinode-975000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 04:15:50.938075    7769 network_create.go:284] running [docker network inspect multinode-975000] to gather additional debugging logs...
	I0729 04:15:50.938089    7769 cli_runner.go:164] Run: docker network inspect multinode-975000
	W0729 04:15:50.956155    7769 cli_runner.go:211] docker network inspect multinode-975000 returned with exit code 1
	I0729 04:15:50.956184    7769 network_create.go:287] error running [docker network inspect multinode-975000]: docker network inspect multinode-975000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-975000 not found
	I0729 04:15:50.956204    7769 network_create.go:289] output of [docker network inspect multinode-975000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-975000 not found
	
	** /stderr **
	I0729 04:15:50.956360    7769 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 04:15:50.975632    7769 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:15:50.977295    7769 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:15:50.977652    7769 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001826480}
	I0729 04:15:50.977669    7769 network_create.go:124] attempt to create docker network multinode-975000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0729 04:15:50.977751    7769 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-975000 multinode-975000
	I0729 04:15:51.041260    7769 network_create.go:108] docker network multinode-975000 192.168.67.0/24 created
	I0729 04:15:51.041299    7769 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-975000" container
	I0729 04:15:51.041413    7769 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 04:15:51.059221    7769 cli_runner.go:164] Run: docker volume create multinode-975000 --label name.minikube.sigs.k8s.io=multinode-975000 --label created_by.minikube.sigs.k8s.io=true
	I0729 04:15:51.076001    7769 oci.go:103] Successfully created a docker volume multinode-975000
	I0729 04:15:51.076116    7769 cli_runner.go:164] Run: docker run --rm --name multinode-975000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-975000 --entrypoint /usr/bin/test -v multinode-975000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 04:15:51.331776    7769 oci.go:107] Successfully prepared a docker volume multinode-975000
	I0729 04:15:51.331820    7769 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:15:51.331840    7769 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 04:15:51.331992    7769 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-975000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0729 04:21:50.914139    7769 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 04:21:50.914272    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:21:50.935348    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:21:50.935467    7769 retry.go:31] will retry after 252.826987ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:21:51.190661    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:21:51.211290    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:21:51.211392    7769 retry.go:31] will retry after 273.98405ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:21:51.487693    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:21:51.506942    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:21:51.507049    7769 retry.go:31] will retry after 542.609834ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:21:52.050632    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:21:52.069102    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:21:52.069215    7769 retry.go:31] will retry after 669.635982ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:21:52.739586    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:21:52.759205    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	W0729 04:21:52.759312    7769 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	
	W0729 04:21:52.759333    7769 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:21:52.759391    7769 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 04:21:52.759454    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:21:52.777746    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:21:52.777851    7769 retry.go:31] will retry after 140.942766ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:21:52.921245    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:21:52.941054    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:21:52.941152    7769 retry.go:31] will retry after 192.190019ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:21:53.135748    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:21:53.156012    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:21:53.156106    7769 retry.go:31] will retry after 736.90543ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:21:53.894018    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:21:53.913337    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:21:53.913438    7769 retry.go:31] will retry after 694.27663ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:21:54.610103    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:21:54.629816    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	W0729 04:21:54.629927    7769 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	
	W0729 04:21:54.629947    7769 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:21:54.629961    7769 start.go:128] duration metric: took 6m3.740536482s to createHost
	I0729 04:21:54.630035    7769 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 04:21:54.630105    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:21:54.647342    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:21:54.647433    7769 retry.go:31] will retry after 339.69875ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:21:54.989593    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:21:55.009058    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:21:55.009151    7769 retry.go:31] will retry after 523.274432ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:21:55.533831    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:21:55.599429    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:21:55.599521    7769 retry.go:31] will retry after 719.468457ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:21:56.320683    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:21:56.339963    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	W0729 04:21:56.340061    7769 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	
	W0729 04:21:56.340076    7769 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:21:56.340143    7769 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 04:21:56.340196    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:21:56.357350    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:21:56.357440    7769 retry.go:31] will retry after 253.745785ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:21:56.613123    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:21:56.632684    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:21:56.632770    7769 retry.go:31] will retry after 541.834455ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:21:57.176486    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:21:57.195399    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:21:57.195490    7769 retry.go:31] will retry after 766.664066ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:21:57.963994    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:21:57.983589    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	W0729 04:21:57.983691    7769 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	
	W0729 04:21:57.983712    7769 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:21:57.983721    7769 fix.go:56] duration metric: took 6m25.040024242s for fixHost
	I0729 04:21:57.983727    7769 start.go:83] releasing machines lock for "multinode-975000", held for 6m25.040055384s
	W0729 04:21:57.983745    7769 start.go:714] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W0729 04:21:57.983810    7769 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I0729 04:21:57.983816    7769 start.go:729] Will try again in 5 seconds ...
	I0729 04:22:02.984444    7769 start.go:360] acquireMachinesLock for multinode-975000: {Name:mk56d69be69adad7dd08096217f5da7f0ad36bac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:22:02.984744    7769 start.go:364] duration metric: took 259.912µs to acquireMachinesLock for "multinode-975000"
	I0729 04:22:02.984785    7769 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:22:02.984795    7769 fix.go:54] fixHost starting: 
	I0729 04:22:02.985253    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:22:03.005785    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:22:03.005831    7769 fix.go:112] recreateIfNeeded on multinode-975000: state= err=unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:22:03.005848    7769 fix.go:117] machineExists: false. err=machine does not exist
	I0729 04:22:03.049439    7769 out.go:177] * docker "multinode-975000" container is missing, will recreate.
	I0729 04:22:03.071207    7769 delete.go:124] DEMOLISHING multinode-975000 ...
	I0729 04:22:03.071396    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:22:03.090144    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	W0729 04:22:03.090188    7769 stop.go:83] unable to get state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:22:03.090206    7769 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:22:03.090564    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:22:03.107419    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:22:03.107481    7769 delete.go:82] Unable to get host status for multinode-975000, assuming it has already been deleted: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:22:03.107569    7769 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-975000
	W0729 04:22:03.124696    7769 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-975000 returned with exit code 1
	I0729 04:22:03.124736    7769 kic.go:371] could not find the container multinode-975000 to remove it. will try anyways
	I0729 04:22:03.124812    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:22:03.141673    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	W0729 04:22:03.141722    7769 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:22:03.141809    7769 cli_runner.go:164] Run: docker exec --privileged -t multinode-975000 /bin/bash -c "sudo init 0"
	W0729 04:22:03.158505    7769 cli_runner.go:211] docker exec --privileged -t multinode-975000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0729 04:22:03.158537    7769 oci.go:650] error shutdown multinode-975000: docker exec --privileged -t multinode-975000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:22:04.160962    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:22:04.180798    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:22:04.180842    7769 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:22:04.180851    7769 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:22:04.180876    7769 retry.go:31] will retry after 611.122501ms: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:22:04.793604    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:22:04.812852    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:22:04.812906    7769 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:22:04.812919    7769 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:22:04.812939    7769 retry.go:31] will retry after 812.843048ms: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:22:05.628111    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:22:05.648161    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:22:05.648217    7769 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:22:05.648230    7769 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:22:05.648253    7769 retry.go:31] will retry after 1.648473525s: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:22:07.299017    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:22:07.318799    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:22:07.318863    7769 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:22:07.318876    7769 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:22:07.318899    7769 retry.go:31] will retry after 1.26693979s: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:22:08.588217    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:22:08.607480    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:22:08.607524    7769 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:22:08.607536    7769 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:22:08.607566    7769 retry.go:31] will retry after 3.269717034s: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:22:11.879645    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:22:11.899190    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:22:11.899239    7769 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:22:11.899256    7769 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:22:11.899279    7769 retry.go:31] will retry after 3.965022836s: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:22:15.864527    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:22:15.884193    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:22:15.884236    7769 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:22:15.884244    7769 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:22:15.884271    7769 retry.go:31] will retry after 7.41782404s: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:22:23.302454    7769 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:22:23.324791    7769 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:22:23.324838    7769 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:22:23.324849    7769 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:22:23.324893    7769 oci.go:88] couldn't shut down multinode-975000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	 
	I0729 04:22:23.324977    7769 cli_runner.go:164] Run: docker rm -f -v multinode-975000
	I0729 04:22:23.347774    7769 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-975000
	W0729 04:22:23.366018    7769 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-975000 returned with exit code 1
	I0729 04:22:23.366128    7769 cli_runner.go:164] Run: docker network inspect multinode-975000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 04:22:23.384827    7769 cli_runner.go:164] Run: docker network rm multinode-975000
	I0729 04:22:23.464787    7769 fix.go:124] Sleeping 1 second for extra luck!
	I0729 04:22:24.464933    7769 start.go:125] createHost starting for "" (driver="docker")
	I0729 04:22:24.489087    7769 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0729 04:22:24.489257    7769 start.go:159] libmachine.API.Create for "multinode-975000" (driver="docker")
	I0729 04:22:24.489287    7769 client.go:168] LocalClient.Create starting
	I0729 04:22:24.489510    7769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/ca.pem
	I0729 04:22:24.489606    7769 main.go:141] libmachine: Decoding PEM data...
	I0729 04:22:24.489632    7769 main.go:141] libmachine: Parsing certificate...
	I0729 04:22:24.489717    7769 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/cert.pem
	I0729 04:22:24.489792    7769 main.go:141] libmachine: Decoding PEM data...
	I0729 04:22:24.489817    7769 main.go:141] libmachine: Parsing certificate...
	I0729 04:22:24.490512    7769 cli_runner.go:164] Run: docker network inspect multinode-975000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 04:22:24.509812    7769 cli_runner.go:211] docker network inspect multinode-975000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 04:22:24.509918    7769 network_create.go:284] running [docker network inspect multinode-975000] to gather additional debugging logs...
	I0729 04:22:24.509937    7769 cli_runner.go:164] Run: docker network inspect multinode-975000
	W0729 04:22:24.527313    7769 cli_runner.go:211] docker network inspect multinode-975000 returned with exit code 1
	I0729 04:22:24.527342    7769 network_create.go:287] error running [docker network inspect multinode-975000]: docker network inspect multinode-975000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-975000 not found
	I0729 04:22:24.527361    7769 network_create.go:289] output of [docker network inspect multinode-975000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-975000 not found
	
	** /stderr **
	I0729 04:22:24.527504    7769 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 04:22:24.547437    7769 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:22:24.549209    7769 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:22:24.551072    7769 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:22:24.551730    7769 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00152dd40}
	I0729 04:22:24.551752    7769 network_create.go:124] attempt to create docker network multinode-975000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0729 04:22:24.551880    7769 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-975000 multinode-975000
	W0729 04:22:24.571455    7769 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-975000 multinode-975000 returned with exit code 1
	W0729 04:22:24.571489    7769 network_create.go:149] failed to create docker network multinode-975000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-975000 multinode-975000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0729 04:22:24.571507    7769 network_create.go:116] failed to create docker network multinode-975000 192.168.76.0/24, will retry: subnet is taken
	I0729 04:22:24.573005    7769 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:22:24.573378    7769 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001470440}
	I0729 04:22:24.573390    7769 network_create.go:124] attempt to create docker network multinode-975000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0729 04:22:24.573458    7769 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-975000 multinode-975000
	I0729 04:22:24.636680    7769 network_create.go:108] docker network multinode-975000 192.168.85.0/24 created
	I0729 04:22:24.636712    7769 kic.go:121] calculated static IP "192.168.85.2" for the "multinode-975000" container
	I0729 04:22:24.636836    7769 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 04:22:24.654785    7769 cli_runner.go:164] Run: docker volume create multinode-975000 --label name.minikube.sigs.k8s.io=multinode-975000 --label created_by.minikube.sigs.k8s.io=true
	I0729 04:22:24.672050    7769 oci.go:103] Successfully created a docker volume multinode-975000
	I0729 04:22:24.672171    7769 cli_runner.go:164] Run: docker run --rm --name multinode-975000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-975000 --entrypoint /usr/bin/test -v multinode-975000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 04:22:24.922069    7769 oci.go:107] Successfully prepared a docker volume multinode-975000
	I0729 04:22:24.922114    7769 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:22:24.922140    7769 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 04:22:24.922254    7769 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-975000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0729 04:28:24.485575    7769 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 04:28:24.485703    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:28:24.504832    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:28:24.504945    7769 retry.go:31] will retry after 353.273422ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:24.946913    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:28:24.965176    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:28:24.965287    7769 retry.go:31] will retry after 354.147936ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:25.319821    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:28:25.339192    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:28:25.339286    7769 retry.go:31] will retry after 555.456622ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:25.895092    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:28:25.915245    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:28:25.915350    7769 retry.go:31] will retry after 505.428285ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:26.422517    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:28:26.441904    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	W0729 04:28:26.442032    7769 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	
	W0729 04:28:26.442056    7769 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:26.442120    7769 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 04:28:26.442189    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:28:26.460225    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:28:26.460323    7769 retry.go:31] will retry after 245.52535ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:26.708305    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:28:26.727882    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:28:26.727975    7769 retry.go:31] will retry after 557.532999ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:27.287377    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:28:27.306601    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:28:27.306702    7769 retry.go:31] will retry after 652.034637ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:27.959756    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:28:27.978890    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	W0729 04:28:27.978994    7769 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	
	W0729 04:28:27.979009    7769 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:27.979020    7769 start.go:128] duration metric: took 6m3.433847092s to createHost
	I0729 04:28:27.979090    7769 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 04:28:27.979150    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:28:27.996738    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:28:27.996831    7769 retry.go:31] will retry after 148.982179ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:28.146328    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:28:28.165172    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:28:28.165281    7769 retry.go:31] will retry after 248.707195ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:28.416395    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:28:28.436310    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:28:28.436409    7769 retry.go:31] will retry after 346.896091ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:28.785769    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:28:28.805102    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:28:28.805196    7769 retry.go:31] will retry after 970.634839ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:29.778268    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:28:29.797889    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	W0729 04:28:29.797991    7769 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	
	W0729 04:28:29.798006    7769 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:29.798074    7769 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 04:28:29.798133    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:28:29.815884    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:28:29.815979    7769 retry.go:31] will retry after 356.990441ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:30.175311    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:28:30.195421    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:28:30.195516    7769 retry.go:31] will retry after 426.456244ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:30.624394    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:28:30.645219    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:28:30.645311    7769 retry.go:31] will retry after 427.022594ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:31.072604    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:28:31.092714    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	I0729 04:28:31.092807    7769 retry.go:31] will retry after 669.13446ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:31.763424    7769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000
	W0729 04:28:31.783418    7769 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000 returned with exit code 1
	W0729 04:28:31.783519    7769 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	
	W0729 04:28:31.783537    7769 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-975000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-975000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:31.783548    7769 fix.go:56] duration metric: took 6m28.718866463s for fixHost
	I0729 04:28:31.783556    7769 start.go:83] releasing machines lock for "multinode-975000", held for 6m28.718910703s
	W0729 04:28:31.783630    7769 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-975000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-975000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0729 04:28:31.828147    7769 out.go:177] 
	W0729 04:28:31.850294    7769 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0729 04:28:31.850337    7769 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0729 04:28:31.850372    7769 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0729 04:28:31.893223    7769 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-975000" : exit status 52
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-975000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-975000
helpers_test.go:235: (dbg) docker inspect multinode-975000:

-- stdout --
	[
	    {
	        "Name": "multinode-975000",
	        "Id": "1545e9d99b4f5e90d25a5afa23f94e828b7e7ca014e8bdae416637eac57bf299",
	        "Created": "2024-07-29T11:22:24.588916356Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-975000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000: exit status 7 (73.901262ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 04:28:32.122379    7969 status.go:249] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-975000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (792.60s)

TestMultiNode/serial/DeleteNode (0.33s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-975000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-975000 node delete m03: exit status 80 (160.058386ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node multinode-975000 host status: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	

** /stderr **
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-amd64 -p multinode-975000 node delete m03": exit status 80
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-975000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-975000 status --alsologtostderr: exit status 7 (74.389456ms)

-- stdout --
	multinode-975000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0729 04:28:32.337607    7975 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:28:32.338171    7975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:28:32.338186    7975 out.go:304] Setting ErrFile to fd 2...
	I0729 04:28:32.338192    7975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:28:32.338830    7975 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 04:28:32.339022    7975 out.go:298] Setting JSON to false
	I0729 04:28:32.339042    7975 mustload.go:65] Loading cluster: multinode-975000
	I0729 04:28:32.339082    7975 notify.go:220] Checking for updates...
	I0729 04:28:32.339296    7975 config.go:182] Loaded profile config "multinode-975000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:28:32.339309    7975 status.go:255] checking status of multinode-975000 ...
	I0729 04:28:32.339700    7975 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:28:32.357009    7975 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:28:32.357053    7975 status.go:330] multinode-975000 host status = "" (err=state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	)
	I0729 04:28:32.357073    7975 status.go:257] multinode-975000 status: &{Name:multinode-975000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 04:28:32.357102    7975 status.go:260] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	E0729 04:28:32.357110    7975 status.go:263] The "multinode-975000" host does not exist!

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-975000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-975000
helpers_test.go:235: (dbg) docker inspect multinode-975000:

-- stdout --
	[
	    {
	        "Name": "multinode-975000",
	        "Id": "1545e9d99b4f5e90d25a5afa23f94e828b7e7ca014e8bdae416637eac57bf299",
	        "Created": "2024-07-29T11:22:24.588916356Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-975000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000: exit status 7 (73.319053ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 04:28:32.451941    7979 status.go:249] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-975000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.33s)

TestMultiNode/serial/StopMultiNode (12.66s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-975000 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-975000 stop: exit status 82 (12.414032691s)

-- stdout --
	* Stopping node "multinode-975000"  ...
	* Stopping node "multinode-975000"  ...
	* Stopping node "multinode-975000"  ...
	* Stopping node "multinode-975000"  ...
	* Stopping node "multinode-975000"  ...
	* Stopping node "multinode-975000"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-975000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-darwin-amd64 -p multinode-975000 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-975000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-975000 status: exit status 7 (77.227624ms)

-- stdout --
	multinode-975000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	E0729 04:28:44.943446    7992 status.go:260] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	E0729 04:28:44.943457    7992 status.go:263] The "multinode-975000" host does not exist!

** /stderr **
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-975000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-975000 status --alsologtostderr: exit status 7 (74.252519ms)

-- stdout --
	multinode-975000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0729 04:28:44.998733    7995 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:28:44.998944    7995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:28:44.998949    7995 out.go:304] Setting ErrFile to fd 2...
	I0729 04:28:44.998953    7995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:28:44.999129    7995 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 04:28:44.999303    7995 out.go:298] Setting JSON to false
	I0729 04:28:44.999324    7995 mustload.go:65] Loading cluster: multinode-975000
	I0729 04:28:44.999358    7995 notify.go:220] Checking for updates...
	I0729 04:28:44.999588    7995 config.go:182] Loaded profile config "multinode-975000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:28:44.999603    7995 status.go:255] checking status of multinode-975000 ...
	I0729 04:28:45.000003    7995 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:28:45.017705    7995 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:28:45.017819    7995 status.go:330] multinode-975000 host status = "" (err=state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	)
	I0729 04:28:45.017840    7995 status.go:257] multinode-975000 status: &{Name:multinode-975000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 04:28:45.017864    7995 status.go:260] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	E0729 04:28:45.017870    7995 status.go:263] The "multinode-975000" host does not exist!

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-975000 status --alsologtostderr": multinode-975000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-975000 status --alsologtostderr": multinode-975000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-975000
helpers_test.go:235: (dbg) docker inspect multinode-975000:

-- stdout --
	[
	    {
	        "Name": "multinode-975000",
	        "Id": "1545e9d99b4f5e90d25a5afa23f94e828b7e7ca014e8bdae416637eac57bf299",
	        "Created": "2024-07-29T11:22:24.588916356Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-975000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000: exit status 7 (72.375305ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 04:28:45.110929    7999 status.go:249] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-975000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (12.66s)

TestMultiNode/serial/RestartMultiNode (101.4s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-975000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0729 04:28:53.765504    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 04:29:14.314993    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-975000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (1m41.297361766s)

-- stdout --
	* [multinode-975000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-975000" primary control-plane node in "multinode-975000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* docker "multinode-975000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

-- /stdout --
** stderr ** 
	I0729 04:28:45.165465    8002 out.go:291] Setting OutFile to fd 1 ...
	I0729 04:28:45.166248    8002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:28:45.166259    8002 out.go:304] Setting ErrFile to fd 2...
	I0729 04:28:45.166263    8002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 04:28:45.166904    8002 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 04:28:45.168588    8002 out.go:298] Setting JSON to false
	I0729 04:28:45.191178    8002 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5295,"bootTime":1722247230,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 04:28:45.191278    8002 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 04:28:45.212930    8002 out.go:177] * [multinode-975000] minikube v1.33.1 on Darwin 14.5
	I0729 04:28:45.254786    8002 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 04:28:45.254859    8002 notify.go:220] Checking for updates...
	I0729 04:28:45.297655    8002 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	I0729 04:28:45.318680    8002 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 04:28:45.339965    8002 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 04:28:45.361948    8002 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	I0729 04:28:45.382656    8002 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 04:28:45.404471    8002 config.go:182] Loaded profile config "multinode-975000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 04:28:45.405194    8002 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 04:28:45.429079    8002 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 04:28:45.429245    8002 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 04:28:45.509449    8002 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:122 SystemTime:2024-07-29 11:28:45.414275076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 04:28:45.552003    8002 out.go:177] * Using the docker driver based on existing profile
	I0729 04:28:45.573129    8002 start.go:297] selected driver: docker
	I0729 04:28:45.573186    8002 start.go:901] validating driver "docker" against &{Name:multinode-975000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-975000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:28:45.573302    8002 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 04:28:45.573510    8002 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 04:28:45.654866    8002 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:122 SystemTime:2024-07-29 11:28:45.560218513 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 04:28:45.657918    8002 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 04:28:45.657985    8002 cni.go:84] Creating CNI manager for ""
	I0729 04:28:45.657994    8002 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 04:28:45.658061    8002 start.go:340] cluster config:
	{Name:multinode-975000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-975000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 04:28:45.701074    8002 out.go:177] * Starting "multinode-975000" primary control-plane node in "multinode-975000" cluster
	I0729 04:28:45.722201    8002 cache.go:121] Beginning downloading kic base image for docker with docker
	I0729 04:28:45.743933    8002 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0729 04:28:45.786218    8002 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:28:45.786282    8002 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 04:28:45.786300    8002 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0729 04:28:45.786322    8002 cache.go:56] Caching tarball of preloaded images
	I0729 04:28:45.786529    8002 preload.go:172] Found /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0729 04:28:45.786548    8002 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 04:28:45.787527    8002 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/multinode-975000/config.json ...
	W0729 04:28:45.812573    8002 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0729 04:28:45.812585    8002 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 04:28:45.812717    8002 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 04:28:45.812744    8002 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 04:28:45.812750    8002 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 04:28:45.812759    8002 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 04:28:45.812765    8002 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0729 04:28:45.815803    8002 image.go:273] response: 
	I0729 04:28:45.944101    8002 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0729 04:28:45.944147    8002 cache.go:194] Successfully downloaded all kic artifacts
	I0729 04:28:45.944193    8002 start.go:360] acquireMachinesLock for multinode-975000: {Name:mk56d69be69adad7dd08096217f5da7f0ad36bac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 04:28:45.944312    8002 start.go:364] duration metric: took 97.392µs to acquireMachinesLock for "multinode-975000"
	I0729 04:28:45.944342    8002 start.go:96] Skipping create...Using existing machine configuration
	I0729 04:28:45.944353    8002 fix.go:54] fixHost starting: 
	I0729 04:28:45.944627    8002 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:28:45.961771    8002 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:28:45.961845    8002 fix.go:112] recreateIfNeeded on multinode-975000: state= err=unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:45.961864    8002 fix.go:117] machineExists: false. err=machine does not exist
	I0729 04:28:46.003881    8002 out.go:177] * docker "multinode-975000" container is missing, will recreate.
	I0729 04:28:46.024812    8002 delete.go:124] DEMOLISHING multinode-975000 ...
	I0729 04:28:46.024910    8002 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:28:46.041764    8002 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	W0729 04:28:46.041818    8002 stop.go:83] unable to get state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:46.041832    8002 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:46.042202    8002 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:28:46.059211    8002 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:28:46.059260    8002 delete.go:82] Unable to get host status for multinode-975000, assuming it has already been deleted: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:46.059364    8002 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-975000
	W0729 04:28:46.076259    8002 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-975000 returned with exit code 1
	I0729 04:28:46.076293    8002 kic.go:371] could not find the container multinode-975000 to remove it. will try anyways
	I0729 04:28:46.076369    8002 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:28:46.093644    8002 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	W0729 04:28:46.093691    8002 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:46.093804    8002 cli_runner.go:164] Run: docker exec --privileged -t multinode-975000 /bin/bash -c "sudo init 0"
	W0729 04:28:46.110753    8002 cli_runner.go:211] docker exec --privileged -t multinode-975000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0729 04:28:46.110782    8002 oci.go:650] error shutdown multinode-975000: docker exec --privileged -t multinode-975000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:47.111613    8002 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:28:47.128718    8002 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:28:47.128764    8002 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:47.128774    8002 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:28:47.128811    8002 retry.go:31] will retry after 512.122797ms: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:47.643066    8002 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:28:47.660216    8002 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:28:47.660262    8002 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:47.660275    8002 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:28:47.660300    8002 retry.go:31] will retry after 769.41068ms: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:48.429987    8002 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:28:48.447071    8002 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:28:48.447112    8002 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:48.447123    8002 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:28:48.447149    8002 retry.go:31] will retry after 898.130236ms: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:49.345906    8002 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:28:49.363043    8002 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:28:49.363091    8002 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:49.363105    8002 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:28:49.363130    8002 retry.go:31] will retry after 963.853979ms: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:50.327329    8002 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:28:50.344467    8002 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:28:50.344510    8002 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:50.344518    8002 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:28:50.344552    8002 retry.go:31] will retry after 3.196543305s: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:53.541346    8002 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:28:53.558473    8002 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:28:53.558516    8002 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:53.558525    8002 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:28:53.558551    8002 retry.go:31] will retry after 2.8607694s: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:56.420234    8002 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:28:56.439726    8002 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:28:56.439768    8002 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:28:56.439778    8002 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:28:56.439803    8002 retry.go:31] will retry after 8.411272453s: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:29:04.852407    8002 cli_runner.go:164] Run: docker container inspect multinode-975000 --format={{.State.Status}}
	W0729 04:29:04.871974    8002 cli_runner.go:211] docker container inspect multinode-975000 --format={{.State.Status}} returned with exit code 1
	I0729 04:29:04.872028    8002 oci.go:662] temporary error verifying shutdown: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	I0729 04:29:04.872038    8002 oci.go:664] temporary error: container multinode-975000 status is  but expect it to be exited
	I0729 04:29:04.872070    8002 oci.go:88] couldn't shut down multinode-975000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000
	 
	I0729 04:29:04.872154    8002 cli_runner.go:164] Run: docker rm -f -v multinode-975000
	I0729 04:29:04.890118    8002 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-975000
	W0729 04:29:04.907542    8002 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-975000 returned with exit code 1
	I0729 04:29:04.907664    8002 cli_runner.go:164] Run: docker network inspect multinode-975000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 04:29:04.925658    8002 cli_runner.go:164] Run: docker network rm multinode-975000
	I0729 04:29:05.004810    8002 fix.go:124] Sleeping 1 second for extra luck!
	I0729 04:29:06.007019    8002 start.go:125] createHost starting for "" (driver="docker")
	I0729 04:29:06.029177    8002 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0729 04:29:06.029368    8002 start.go:159] libmachine.API.Create for "multinode-975000" (driver="docker")
	I0729 04:29:06.029407    8002 client.go:168] LocalClient.Create starting
	I0729 04:29:06.029624    8002 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/ca.pem
	I0729 04:29:06.029725    8002 main.go:141] libmachine: Decoding PEM data...
	I0729 04:29:06.029757    8002 main.go:141] libmachine: Parsing certificate...
	I0729 04:29:06.029862    8002 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19337-1372/.minikube/certs/cert.pem
	I0729 04:29:06.029940    8002 main.go:141] libmachine: Decoding PEM data...
	I0729 04:29:06.029960    8002 main.go:141] libmachine: Parsing certificate...
	I0729 04:29:06.051719    8002 cli_runner.go:164] Run: docker network inspect multinode-975000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 04:29:06.071217    8002 cli_runner.go:211] docker network inspect multinode-975000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 04:29:06.071315    8002 network_create.go:284] running [docker network inspect multinode-975000] to gather additional debugging logs...
	I0729 04:29:06.071340    8002 cli_runner.go:164] Run: docker network inspect multinode-975000
	W0729 04:29:06.088374    8002 cli_runner.go:211] docker network inspect multinode-975000 returned with exit code 1
	I0729 04:29:06.088401    8002 network_create.go:287] error running [docker network inspect multinode-975000]: docker network inspect multinode-975000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-975000 not found
	I0729 04:29:06.088413    8002 network_create.go:289] output of [docker network inspect multinode-975000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-975000 not found
	
	** /stderr **
	I0729 04:29:06.088550    8002 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 04:29:06.107541    8002 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:29:06.109026    8002 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0729 04:29:06.109425    8002 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015eafe0}
	I0729 04:29:06.109448    8002 network_create.go:124] attempt to create docker network multinode-975000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0729 04:29:06.109541    8002 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-975000 multinode-975000
	I0729 04:29:06.172186    8002 network_create.go:108] docker network multinode-975000 192.168.67.0/24 created
	I0729 04:29:06.172227    8002 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-975000" container
	I0729 04:29:06.172327    8002 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 04:29:06.190607    8002 cli_runner.go:164] Run: docker volume create multinode-975000 --label name.minikube.sigs.k8s.io=multinode-975000 --label created_by.minikube.sigs.k8s.io=true
	I0729 04:29:06.207965    8002 oci.go:103] Successfully created a docker volume multinode-975000
	I0729 04:29:06.208073    8002 cli_runner.go:164] Run: docker run --rm --name multinode-975000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-975000 --entrypoint /usr/bin/test -v multinode-975000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 04:29:06.454782    8002 oci.go:107] Successfully prepared a docker volume multinode-975000
	I0729 04:29:06.454830    8002 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 04:29:06.454846    8002 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 04:29:06.454964    8002 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-975000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-975000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-975000
helpers_test.go:235: (dbg) docker inspect multinode-975000:

-- stdout --
	[
	    {
	        "Name": "multinode-975000",
	        "Id": "69a5d6af043041b242cfdbe9457e1b03700351d0357b640b870e1807fde5da26",
	        "Created": "2024-07-29T11:29:06.125116239Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-975000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-975000 -n multinode-975000: exit status 7 (73.608015ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 04:30:26.506189    8330 status.go:249] status error: host: state: unknown state "multinode-975000": docker container inspect multinode-975000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-975000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-975000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (101.40s)

TestScheduledStopUnix (300.53s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-400000 --memory=2048 --driver=docker 
E0729 04:33:53.762113    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 04:34:14.311591    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 04:35:16.821031    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-400000 --memory=2048 --driver=docker : signal: killed (5m0.00342863s)

-- stdout --
	* [scheduled-stop-400000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-400000" primary control-plane node in "scheduled-stop-400000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

-- stdout --
	* [scheduled-stop-400000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "scheduled-stop-400000" primary control-plane node in "scheduled-stop-400000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

-- /stdout --
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-29 04:36:57.297078 -0700 PDT m=+4647.720765458
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-400000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-400000:

-- stdout --
	[
	    {
	        "Name": "scheduled-stop-400000",
	        "Id": "edbd212a39a940c86e876a82ea2e63085d753cf521876ba75fd008365a0e5b24",
	        "Created": "2024-07-29T11:31:58.271168966Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-400000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-400000 -n scheduled-stop-400000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-400000 -n scheduled-stop-400000: exit status 7 (74.175831ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 04:36:57.392192    8684 status.go:249] status error: host: state: unknown state "scheduled-stop-400000": docker container inspect scheduled-stop-400000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-400000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-400000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-400000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-400000
--- FAIL: TestScheduledStopUnix (300.53s)

TestSkaffold (300.55s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe1169184806 version
skaffold_test.go:59: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe1169184806 version: (1.711045583s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-743000 --memory=2600 --driver=docker 
E0729 04:38:53.759093    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 04:39:14.307635    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 04:40:37.360511    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-743000 --memory=2600 --driver=docker : signal: killed (4m57.743701302s)

-- stdout --
	* [skaffold-743000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-743000" primary control-plane node in "skaffold-743000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

-- stdout --
	* [skaffold-743000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "skaffold-743000" primary control-plane node in "skaffold-743000" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

-- /stdout --
panic.go:626: *** TestSkaffold FAILED at 2024-07-29 04:41:57.828829 -0700 PDT m=+4948.256446143
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-743000
helpers_test.go:235: (dbg) docker inspect skaffold-743000:

-- stdout --
	[
	    {
	        "Name": "skaffold-743000",
	        "Id": "75d30240d4fd36251e6fbb97b1956b768ea34a5f71d1d4af5aaa4214cd929212",
	        "Created": "2024-07-29T11:37:01.130511318Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-743000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-743000 -n skaffold-743000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-743000 -n skaffold-743000: exit status 7 (73.096123ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0729 04:41:57.922061    8771 status.go:249] status error: host: state: unknown state "skaffold-743000": docker container inspect skaffold-743000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-743000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-743000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-743000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-743000
--- FAIL: TestSkaffold (300.55s)

TestInsufficientStorage (300.45s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-968000 --memory=2048 --output=json --wait=true --driver=docker 
E0729 04:43:53.754128    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 04:44:14.303489    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-968000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.004858794s)

-- stdout --
	{"specversion":"1.0","id":"cc1bface-2c08-489d-a247-a2f5bb71679d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-968000] minikube v1.33.1 on Darwin 14.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4dd8622d-48a7-4a86-87ce-c5674c6209d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19337"}}
	{"specversion":"1.0","id":"fbc9f991-d54c-4760-a0e0-7acdbec22eb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig"}}
	{"specversion":"1.0","id":"9dfd6972-97e1-4b7d-8b89-8adb3a46f430","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"3aa820fc-adfd-4336-a29b-afa09d122da8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"280a6ad9-eb31-42fe-8afd-5d79b920e0cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube"}}
	{"specversion":"1.0","id":"4fe30da8-83f2-4cd5-baa6-120582bfa814","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"da6f29a4-8653-4214-983e-4e1c8ae3c078","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b09a2958-ae42-42da-a1d8-ff99e4645387","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0c8aa6d5-184b-4dfe-889e-df9bc5edad47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"46b17ac7-b8bf-4834-836a-ea8d8eac73a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"cc62fb8c-a217-4488-bf13-9427b57aafe7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-968000\" primary control-plane node in \"insufficient-storage-968000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"cdd8a538-e7a4-4c9e-92f7-a5e3f2bb47a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1721902582-19326 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b9d05f5b-1310-4e40-9932-870d97c76a9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-968000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-968000 --output=json --layout=cluster: context deadline exceeded (767ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-968000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-968000
--- FAIL: TestInsufficientStorage (300.45s)
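The TestInsufficientStorage run above exercises minikube's `--output=json` mode, which emits one CloudEvents-style JSON object per line on stdout. As a minimal sketch of how such output can be post-processed (plain Python stdlib; the sample event is copied from the log above, and `step_messages` is a hypothetical helper, not part of minikube):

```python
import json

# One CloudEvents-style line copied from the TestInsufficientStorage stdout above.
sample = '{"specversion":"1.0","id":"b9d05f5b-1310-4e40-9932-870d97c76a9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}'

def step_messages(lines):
    """Yield (currentstep, message) for each io.k8s.sigs.minikube.step event."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event.get("type") == "io.k8s.sigs.minikube.step":
            data = event.get("data", {})
            yield data.get("currentstep"), data.get("message")

print(list(step_messages([sample])))
```

Filtering on the event `type` field separates progress steps (`io.k8s.sigs.minikube.step`) from informational lines (`io.k8s.sigs.minikube.info`) such as the MINIKUBE_* environment dump shown above.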


Test pass (171/212)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.99
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.29
9 TestDownloadOnly/v1.20.0/DeleteAll 0.34
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.30.3/json-events 12.02
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.29
18 TestDownloadOnly/v1.30.3/DeleteAll 0.34
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.21
21 TestDownloadOnly/v1.31.0-beta.0/json-events 8.41
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.29
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.34
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.21
29 TestDownloadOnlyKic 1.54
30 TestBinaryMirror 1.35
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.17
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.17
36 TestAddons/Setup 226.51
38 TestAddons/serial/Volcano 29.62
40 TestAddons/serial/GCPAuth/Namespaces 0.11
44 TestAddons/parallel/InspektorGadget 10.67
45 TestAddons/parallel/MetricsServer 5.65
46 TestAddons/parallel/HelmTiller 10.78
48 TestAddons/parallel/CSI 44.2
49 TestAddons/parallel/Headlamp 17.71
50 TestAddons/parallel/CloudSpanner 5.9
51 TestAddons/parallel/LocalPath 21.3
52 TestAddons/parallel/NvidiaDevicePlugin 6.5
53 TestAddons/parallel/Yakd 11.73
54 TestAddons/StoppedEnableDisable 11.53
62 TestHyperKitDriverInstallOrUpdate 7.56
65 TestErrorSpam/setup 21.32
66 TestErrorSpam/start 1.85
67 TestErrorSpam/status 0.8
68 TestErrorSpam/pause 1.41
69 TestErrorSpam/unpause 1.47
70 TestErrorSpam/stop 2.34
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 37.55
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 35.1
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.33
82 TestFunctional/serial/CacheCmd/cache/add_local 1.41
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
84 TestFunctional/serial/CacheCmd/cache/list 0.08
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.49
87 TestFunctional/serial/CacheCmd/cache/delete 0.17
88 TestFunctional/serial/MinikubeKubectlCmd 1.31
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.58
90 TestFunctional/serial/ExtraConfig 39.84
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 2.95
93 TestFunctional/serial/LogsFileCmd 3.01
94 TestFunctional/serial/InvalidService 4.11
96 TestFunctional/parallel/ConfigCmd 0.5
97 TestFunctional/parallel/DashboardCmd 12.59
98 TestFunctional/parallel/DryRun 1.71
99 TestFunctional/parallel/InternationalLanguage 0.64
100 TestFunctional/parallel/StatusCmd 0.88
105 TestFunctional/parallel/AddonsCmd 0.23
106 TestFunctional/parallel/PersistentVolumeClaim 26.3
108 TestFunctional/parallel/SSHCmd 0.54
109 TestFunctional/parallel/CpCmd 1.83
110 TestFunctional/parallel/MySQL 29.03
111 TestFunctional/parallel/FileSync 0.27
112 TestFunctional/parallel/CertSync 1.87
116 TestFunctional/parallel/NodeLabels 0.08
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.3
120 TestFunctional/parallel/License 0.61
121 TestFunctional/parallel/Version/short 0.1
122 TestFunctional/parallel/Version/components 0.48
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
127 TestFunctional/parallel/ImageCommands/ImageBuild 2.37
128 TestFunctional/parallel/ImageCommands/Setup 1.73
129 TestFunctional/parallel/DockerEnv/bash 1.11
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.18
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.62
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.93
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.61
140 TestFunctional/parallel/ServiceCmd/DeployApp 23.16
141 TestFunctional/parallel/ServiceCmd/List 0.3
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
143 TestFunctional/parallel/ServiceCmd/HTTPS 15
145 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.45
146 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
148 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.15
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
150 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
155 TestFunctional/parallel/ServiceCmd/Format 15
156 TestFunctional/parallel/ServiceCmd/URL 15
157 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
158 TestFunctional/parallel/ProfileCmd/profile_list 0.39
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
160 TestFunctional/parallel/MountCmd/any-port 8.07
161 TestFunctional/parallel/MountCmd/specific-port 2.04
162 TestFunctional/parallel/MountCmd/VerifyCleanup 2.35
163 TestFunctional/delete_echo-server_images 0.04
164 TestFunctional/delete_my-image_image 0.02
165 TestFunctional/delete_minikube_cached_images 0.02
169 TestMultiControlPlane/serial/StartCluster 106.34
170 TestMultiControlPlane/serial/DeployApp 5.58
171 TestMultiControlPlane/serial/PingHostFromPods 1.35
172 TestMultiControlPlane/serial/AddWorkerNode 20.56
173 TestMultiControlPlane/serial/NodeLabels 0.06
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.66
175 TestMultiControlPlane/serial/CopyFile 16.05
176 TestMultiControlPlane/serial/StopSecondaryNode 11.36
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.49
178 TestMultiControlPlane/serial/RestartSecondaryNode 65.14
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.66
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 174.73
181 TestMultiControlPlane/serial/DeleteSecondaryNode 10.31
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.48
183 TestMultiControlPlane/serial/StopCluster 32.49
184 TestMultiControlPlane/serial/RestartCluster 79.91
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.47
186 TestMultiControlPlane/serial/AddSecondaryNode 37.77
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.67
190 TestImageBuild/serial/Setup 21.73
191 TestImageBuild/serial/NormalBuild 1.66
192 TestImageBuild/serial/BuildWithBuildArg 0.81
193 TestImageBuild/serial/BuildWithDockerIgnore 0.67
194 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.66
198 TestJSONOutput/start/Command 41.34
199 TestJSONOutput/start/Audit 0
201 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/pause/Command 0.45
205 TestJSONOutput/pause/Audit 0
207 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/unpause/Command 0.49
211 TestJSONOutput/unpause/Audit 0
213 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
216 TestJSONOutput/stop/Command 5.72
217 TestJSONOutput/stop/Audit 0
219 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
220 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
221 TestErrorJSONOutput 0.57
223 TestKicCustomNetwork/create_custom_network 22.08
224 TestKicCustomNetwork/use_default_bridge_network 21.63
225 TestKicExistingNetwork 22.6
226 TestKicCustomSubnet 22.47
227 TestKicStaticIP 21.64
228 TestMainNoArgs 0.08
229 TestMinikubeProfile 47.61
232 TestMountStart/serial/StartWithMountFirst 6.97
233 TestMountStart/serial/VerifyMountFirst 0.25
234 TestMountStart/serial/StartWithMountSecond 7.74
254 TestPreload 90.26
275 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 9.12
276 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 10.55
TestDownloadOnly/v1.20.0/json-events (10.99s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-410000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-410000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker : (10.990257817s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.99s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-410000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-410000: exit status 85 (291.186193ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-410000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT |          |
	|         | -p download-only-410000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 03:19:29
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 03:19:29.596051    1911 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:19:29.596328    1911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:19:29.596334    1911 out.go:304] Setting ErrFile to fd 2...
	I0729 03:19:29.596338    1911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:19:29.596517    1911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	W0729 03:19:29.596613    1911 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19337-1372/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19337-1372/.minikube/config/config.json: no such file or directory
	I0729 03:19:29.598323    1911 out.go:298] Setting JSON to true
	I0729 03:19:29.623114    1911 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1139,"bootTime":1722247230,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 03:19:29.623212    1911 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:19:29.647627    1911 out.go:97] [download-only-410000] minikube v1.33.1 on Darwin 14.5
	I0729 03:19:29.647722    1911 notify.go:220] Checking for updates...
	W0729 03:19:29.647731    1911 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 03:19:29.668739    1911 out.go:169] MINIKUBE_LOCATION=19337
	I0729 03:19:29.690564    1911 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	I0729 03:19:29.713813    1911 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 03:19:29.734621    1911 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:19:29.755595    1911 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	W0729 03:19:29.797892    1911 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 03:19:29.798395    1911 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:19:29.825260    1911 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 03:19:29.825395    1911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 03:19:29.914109    1911 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:65 SystemTime:2024-07-29 10:19:29.902523304 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 03:19:29.935463    1911 out.go:97] Using the docker driver based on user configuration
	I0729 03:19:29.935490    1911 start.go:297] selected driver: docker
	I0729 03:19:29.935501    1911 start.go:901] validating driver "docker" against <nil>
	I0729 03:19:29.935658    1911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 03:19:30.018646    1911 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:65 SystemTime:2024-07-29 10:19:30.009984845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 03:19:30.018842    1911 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:19:30.022956    1911 start_flags.go:393] Using suggested 8100MB memory alloc based on sys=32768MB, container=15991MB
	I0729 03:19:30.023263    1911 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 03:19:30.044762    1911 out.go:169] Using Docker Desktop driver with root privileges
	I0729 03:19:30.065775    1911 cni.go:84] Creating CNI manager for ""
	I0729 03:19:30.065815    1911 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0729 03:19:30.065976    1911 start.go:340] cluster config:
	{Name:download-only-410000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:8100 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-410000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:19:30.087679    1911 out.go:97] Starting "download-only-410000" primary control-plane node in "download-only-410000" cluster
	I0729 03:19:30.087721    1911 cache.go:121] Beginning downloading kic base image for docker with docker
	I0729 03:19:30.109744    1911 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0729 03:19:30.109797    1911 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 03:19:30.109900    1911 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 03:19:30.127213    1911 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 03:19:30.127460    1911 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 03:19:30.127612    1911 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 03:19:30.167261    1911 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0729 03:19:30.167281    1911 cache.go:56] Caching tarball of preloaded images
	I0729 03:19:30.167565    1911 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 03:19:30.189983    1911 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 03:19:30.190032    1911 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0729 03:19:30.270286    1911 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0729 03:19:32.919746    1911 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 03:19:36.068994    1911 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0729 03:19:36.069188    1911 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0729 03:19:36.619749    1911 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0729 03:19:36.619980    1911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/download-only-410000/config.json ...
	I0729 03:19:36.620005    1911 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/download-only-410000/config.json: {Name:mk79442b85770125ec22f641326c724943a496a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:19:36.620284    1911 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0729 03:19:36.620580    1911 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-410000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-410000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

TestDownloadOnly/v1.20.0/DeleteAll (0.34s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.34s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-410000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.30.3/json-events (12.02s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-838000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-838000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=docker : (12.022956527s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (12.02s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-838000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-838000: exit status 85 (292.072401ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-410000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT |                     |
	|         | -p download-only-410000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
	| delete  | -p download-only-410000        | download-only-410000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
	| start   | -o=json --download-only        | download-only-838000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT |                     |
	|         | -p download-only-838000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 03:19:41
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 03:19:41.428929    1960 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:19:41.429656    1960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:19:41.429665    1960 out.go:304] Setting ErrFile to fd 2...
	I0729 03:19:41.429672    1960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:19:41.430311    1960 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 03:19:41.431839    1960 out.go:298] Setting JSON to true
	I0729 03:19:41.456397    1960 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1151,"bootTime":1722247230,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 03:19:41.456483    1960 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:19:41.477450    1960 out.go:97] [download-only-838000] minikube v1.33.1 on Darwin 14.5
	I0729 03:19:41.477680    1960 notify.go:220] Checking for updates...
	I0729 03:19:41.499551    1960 out.go:169] MINIKUBE_LOCATION=19337
	I0729 03:19:41.521421    1960 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	I0729 03:19:41.542433    1960 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 03:19:41.563693    1960 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:19:41.585290    1960 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	W0729 03:19:41.627650    1960 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 03:19:41.628132    1960 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:19:41.652255    1960 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 03:19:41.652409    1960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 03:19:41.735799    1960 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:65 SystemTime:2024-07-29 10:19:41.726902131 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 03:19:41.757367    1960 out.go:97] Using the docker driver based on user configuration
	I0729 03:19:41.757412    1960 start.go:297] selected driver: docker
	I0729 03:19:41.757427    1960 start.go:901] validating driver "docker" against <nil>
	I0729 03:19:41.757666    1960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 03:19:41.842691    1960 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:65 SystemTime:2024-07-29 10:19:41.831897523 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 03:19:41.842905    1960 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:19:41.845687    1960 start_flags.go:393] Using suggested 8100MB memory alloc based on sys=32768MB, container=15991MB
	I0729 03:19:41.845817    1960 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 03:19:41.867587    1960 out.go:169] Using Docker Desktop driver with root privileges
	I0729 03:19:41.888682    1960 cni.go:84] Creating CNI manager for ""
	I0729 03:19:41.888724    1960 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:19:41.888742    1960 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 03:19:41.888860    1960 start.go:340] cluster config:
	{Name:download-only-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:8100 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:19:41.910421    1960 out.go:97] Starting "download-only-838000" primary control-plane node in "download-only-838000" cluster
	I0729 03:19:41.910464    1960 cache.go:121] Beginning downloading kic base image for docker with docker
	I0729 03:19:41.931322    1960 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0729 03:19:41.931413    1960 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:19:41.931504    1960 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 03:19:41.949928    1960 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 03:19:41.950112    1960 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 03:19:41.950131    1960 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 03:19:41.950138    1960 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 03:19:41.950145    1960 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 03:19:41.985390    1960 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0729 03:19:41.985414    1960 cache.go:56] Caching tarball of preloaded images
	I0729 03:19:41.985722    1960 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:19:42.007633    1960 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0729 03:19:42.007660    1960 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0729 03:19:42.081776    1960 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4?checksum=md5:6304692df2fe6f7b0bdd7f93d160be8c -> /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0729 03:19:48.832291    1960 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0729 03:19:48.832491    1960 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0729 03:19:49.323711    1960 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0729 03:19:49.323962    1960 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/download-only-838000/config.json ...
	I0729 03:19:49.323989    1960 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/download-only-838000/config.json: {Name:mk5ce3337f358cb1f2df22c0ee47b9f5cb76492a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:19:49.324699    1960 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0729 03:19:49.324969    1960 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/darwin/amd64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-838000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-838000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.29s)

TestDownloadOnly/v1.30.3/DeleteAll (0.34s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.34s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-838000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.31.0-beta.0/json-events (8.41s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-415000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-415000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=docker : (8.411003434s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (8.41s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-415000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-415000: exit status 85 (294.078443ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-410000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT |                     |
	|         | -p download-only-410000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
	| delete  | -p download-only-410000             | download-only-410000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
	| start   | -o=json --download-only             | download-only-838000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT |                     |
	|         | -p download-only-838000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
	| delete  | -p download-only-838000             | download-only-838000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT | 29 Jul 24 03:19 PDT |
	| start   | -o=json --download-only             | download-only-415000 | jenkins | v1.33.1 | 29 Jul 24 03:19 PDT |                     |
	|         | -p download-only-415000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 03:19:54
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 03:19:54.291585    2009 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:19:54.291752    2009 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:19:54.291757    2009 out.go:304] Setting ErrFile to fd 2...
	I0729 03:19:54.291761    2009 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:19:54.291930    2009 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 03:19:54.293355    2009 out.go:298] Setting JSON to true
	I0729 03:19:54.318922    2009 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1164,"bootTime":1722247230,"procs":445,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 03:19:54.319036    2009 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:19:54.340008    2009 out.go:97] [download-only-415000] minikube v1.33.1 on Darwin 14.5
	I0729 03:19:54.340141    2009 notify.go:220] Checking for updates...
	I0729 03:19:54.361176    2009 out.go:169] MINIKUBE_LOCATION=19337
	I0729 03:19:54.381949    2009 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	I0729 03:19:54.403192    2009 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 03:19:54.424408    2009 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:19:54.445184    2009 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	W0729 03:19:54.487160    2009 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 03:19:54.487424    2009 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:19:54.510450    2009 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 03:19:54.510607    2009 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 03:19:54.594404    2009 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:65 SystemTime:2024-07-29 10:19:54.585744022 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 03:19:54.616026    2009 out.go:97] Using the docker driver based on user configuration
	I0729 03:19:54.616062    2009 start.go:297] selected driver: docker
	I0729 03:19:54.616076    2009 start.go:901] validating driver "docker" against <nil>
	I0729 03:19:54.616288    2009 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 03:19:54.705437    2009 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:65 SystemTime:2024-07-29 10:19:54.692225087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 03:19:54.705653    2009 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 03:19:54.708815    2009 start_flags.go:393] Using suggested 8100MB memory alloc based on sys=32768MB, container=15991MB
	I0729 03:19:54.708960    2009 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 03:19:54.730079    2009 out.go:169] Using Docker Desktop driver with root privileges
	I0729 03:19:54.751244    2009 cni.go:84] Creating CNI manager for ""
	I0729 03:19:54.751277    2009 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0729 03:19:54.751292    2009 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 03:19:54.751433    2009 start.go:340] cluster config:
	{Name:download-only-415000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:8100 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-415000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:19:54.773013    2009 out.go:97] Starting "download-only-415000" primary control-plane node in "download-only-415000" cluster
	I0729 03:19:54.773042    2009 cache.go:121] Beginning downloading kic base image for docker with docker
	I0729 03:19:54.794140    2009 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0729 03:19:54.794185    2009 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 03:19:54.794262    2009 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 03:19:54.812791    2009 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 03:19:54.812975    2009 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 03:19:54.812994    2009 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 03:19:54.813000    2009 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 03:19:54.813007    2009 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 03:19:54.856448    2009 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0729 03:19:54.856488    2009 cache.go:56] Caching tarball of preloaded images
	I0729 03:19:54.856790    2009 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 03:19:54.878143    2009 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0729 03:19:54.878182    2009 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0729 03:19:54.956790    2009 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:181d3c061f7abe363e688bf9ac3c9580 -> /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0729 03:19:57.892963    2009 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0729 03:19:57.893134    2009 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0729 03:19:58.373265    2009 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0729 03:19:58.373520    2009 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/download-only-415000/config.json ...
	I0729 03:19:58.373549    2009 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/download-only-415000/config.json: {Name:mk2a7eb4af9e194703ca66c946ccd1f968d9e827 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 03:19:58.373849    2009 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0729 03:19:58.374082    2009 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19337-1372/.minikube/cache/darwin/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-415000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-415000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.34s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.34s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-415000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnlyKic (1.54s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-259000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-259000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-259000
--- PASS: TestDownloadOnlyKic (1.54s)

TestBinaryMirror (1.35s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-781000 --alsologtostderr --binary-mirror http://127.0.0.1:49352 --driver=docker 
helpers_test.go:175: Cleaning up "binary-mirror-781000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-781000
--- PASS: TestBinaryMirror (1.35s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.17s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-257000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-257000: exit status 85 (169.467273ms)

-- stdout --
	* Profile "addons-257000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-257000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.17s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.17s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-257000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-257000: exit status 85 (169.443773ms)

-- stdout --
	* Profile "addons-257000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-257000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.17s)

TestAddons/Setup (226.51s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-257000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-257000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m46.511027426s)
--- PASS: TestAddons/Setup (226.51s)

TestAddons/serial/Volcano (29.62s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 14.494752ms
addons_test.go:897: volcano-scheduler stabilized in 14.577335ms
addons_test.go:905: volcano-admission stabilized in 14.989627ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-l7vjb" [3e6bedee-e448-4ce9-abbf-3dbb7a0d8f06] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003650901s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-f75dl" [99dd3414-711a-4770-bf60-13dc2722912f] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00340703s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-sskrp" [7cad75c2-5b98-4d98-9b3f-d9495c28aeeb] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003121809s
addons_test.go:932: (dbg) Run:  kubectl --context addons-257000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-257000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-257000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [4a15941f-26bf-4cb6-96be-87aba90aaef9] Pending
helpers_test.go:344: "test-job-nginx-0" [4a15941f-26bf-4cb6-96be-87aba90aaef9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [4a15941f-26bf-4cb6-96be-87aba90aaef9] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003926272s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-amd64 -p addons-257000 addons disable volcano --alsologtostderr -v=1
--- PASS: TestAddons/serial/Volcano (29.62s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-257000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-257000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/InspektorGadget (10.67s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-78ntn" [cfa31a10-7284-4be6-b38c-b91f1a318824] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004324626s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-257000
addons_test.go:851: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-257000: (5.664840774s)
--- PASS: TestAddons/parallel/InspektorGadget (10.67s)

TestAddons/parallel/MetricsServer (5.65s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.909421ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-g4sd7" [3186cdf5-5eed-47d5-aa2e-f00951490f7e] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00444817s
addons_test.go:417: (dbg) Run:  kubectl --context addons-257000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-257000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.65s)

TestAddons/parallel/HelmTiller (10.78s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.169459ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-ktwwf" [82908429-59d8-43c5-9172-3cfc5b907c4f] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003891618s
addons_test.go:475: (dbg) Run:  kubectl --context addons-257000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-257000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.218621287s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-257000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.78s)

TestAddons/parallel/CSI (44.2s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 7.874771ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-257000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-257000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f41691b3-9634-4231-8a25-3773e2247808] Pending
helpers_test.go:344: "task-pv-pod" [f41691b3-9634-4231-8a25-3773e2247808] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f41691b3-9634-4231-8a25-3773e2247808] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003579876s
addons_test.go:590: (dbg) Run:  kubectl --context addons-257000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-257000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-257000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-257000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-257000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-257000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-257000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f4f9040e-596e-402c-93b7-8f5974092a2c] Pending
helpers_test.go:344: "task-pv-pod-restore" [f4f9040e-596e-402c-93b7-8f5974092a2c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f4f9040e-596e-402c-93b7-8f5974092a2c] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004366502s
addons_test.go:632: (dbg) Run:  kubectl --context addons-257000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-257000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-257000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-257000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-amd64 -p addons-257000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.565815426s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-amd64 -p addons-257000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-darwin-amd64 -p addons-257000 addons disable volumesnapshots --alsologtostderr -v=1: (1.299458051s)
--- PASS: TestAddons/parallel/CSI (44.20s)

TestAddons/parallel/Headlamp (17.71s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-257000 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-257000 --alsologtostderr -v=1: (1.092574116s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-ttcb5" [48cb1698-3b79-498d-bc4c-3155cc5ac093] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-ttcb5" [48cb1698-3b79-498d-bc4c-3155cc5ac093] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.005569818s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-amd64 -p addons-257000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-amd64 -p addons-257000 addons disable headlamp --alsologtostderr -v=1: (5.607013913s)
--- PASS: TestAddons/parallel/Headlamp (17.71s)

TestAddons/parallel/CloudSpanner (5.9s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-5dgmx" [a8a4fa11-2e24-442f-95d5-c7b3f5dcadbe] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008512894s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-257000
--- PASS: TestAddons/parallel/CloudSpanner (5.90s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (21.3s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-257000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-257000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [92c64a90-4b80-4a17-980b-a889e412141d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [92c64a90-4b80-4a17-980b-a889e412141d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [92c64a90-4b80-4a17-980b-a889e412141d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.005644274s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-257000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-amd64 -p addons-257000 ssh "cat /opt/local-path-provisioner/pvc-8995b281-e343-497b-9f9e-95be2ce7190e_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-257000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-257000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 -p addons-257000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (21.30s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.5s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-mspmf" [27e83f27-7b99-4426-be53-52212ee39d0f] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003860957s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-257000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.50s)

                                                
                                    
TestAddons/parallel/Yakd (11.73s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-c9wz2" [7c25b03b-2c61-4602-a91f-e59785f56910] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003743563s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-amd64 -p addons-257000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-amd64 -p addons-257000 addons disable yakd --alsologtostderr -v=1: (5.72506689s)
--- PASS: TestAddons/parallel/Yakd (11.73s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.53s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-257000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-257000: (10.978965803s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-257000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-257000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-257000
--- PASS: TestAddons/StoppedEnableDisable (11.53s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (7.56s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.56s)

                                                
                                    
TestErrorSpam/setup (21.32s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-040000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-040000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-040000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-040000 --driver=docker : (21.320441682s)
--- PASS: TestErrorSpam/setup (21.32s)

                                                
                                    
TestErrorSpam/start (1.85s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-040000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-040000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-040000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-040000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-040000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-040000 start --dry-run
--- PASS: TestErrorSpam/start (1.85s)

                                                
                                    
TestErrorSpam/status (0.8s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-040000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-040000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-040000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-040000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-040000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-040000 status
--- PASS: TestErrorSpam/status (0.80s)

                                                
                                    
TestErrorSpam/pause (1.41s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-040000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-040000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-040000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-040000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-040000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-040000 pause
--- PASS: TestErrorSpam/pause (1.41s)

                                                
                                    
TestErrorSpam/unpause (1.47s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-040000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-040000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-040000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-040000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-040000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-040000 unpause
--- PASS: TestErrorSpam/unpause (1.47s)

                                                
                                    
TestErrorSpam/stop (2.34s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-040000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-040000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-040000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-040000 stop: (1.866078624s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-040000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-040000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-040000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-040000 stop
--- PASS: TestErrorSpam/stop (2.34s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19337-1372/.minikube/files/etc/test/nested/copy/1909/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (37.55s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-818000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-818000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (37.544443983s)
--- PASS: TestFunctional/serial/StartWithProxy (37.55s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (35.1s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-818000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-818000 --alsologtostderr -v=8: (35.09815438s)
functional_test.go:659: soft start took 35.098654055s for "functional-818000" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.10s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-818000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-818000 cache add registry.k8s.io/pause:3.1: (1.129372192s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-818000 cache add registry.k8s.io/pause:3.3: (1.143663725s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-818000 cache add registry.k8s.io/pause:latest: (1.056510939s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.33s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-818000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2559346963/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 cache add minikube-local-cache-test:functional-818000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 cache delete minikube-local-cache-test:functional-818000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-818000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-818000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (255.929607ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (1.31s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 kubectl -- --context functional-818000 get pods
functional_test.go:712: (dbg) Done: out/minikube-darwin-amd64 -p functional-818000 kubectl -- --context functional-818000 get pods: (1.312961999s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.31s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.58s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-818000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-818000 get pods: (1.57676537s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.58s)

                                                
                                    
TestFunctional/serial/ExtraConfig (39.84s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-818000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0729 03:28:53.686889    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 03:28:53.696556    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 03:28:53.708690    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 03:28:53.729569    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 03:28:53.769826    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 03:28:53.852010    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 03:28:54.012162    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 03:28:54.334166    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 03:28:54.974894    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 03:28:56.255023    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 03:28:58.815557    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-818000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.835719219s)
functional_test.go:757: restart took 39.835807917s for "functional-818000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.84s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-818000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (2.95s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-818000 logs: (2.949183969s)
--- PASS: TestFunctional/serial/LogsCmd (2.95s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (3.01s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd546562470/001/logs.txt
E0729 03:29:03.935935    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-818000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd546562470/001/logs.txt: (3.012191513s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.01s)

                                                
                                    
TestFunctional/serial/InvalidService (4.11s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-818000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-818000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-818000: exit status 115 (384.576294ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31004 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-818000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.11s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-818000 config get cpus: exit status 14 (62.912562ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-818000 config get cpus: exit status 14 (59.369453ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)

TestFunctional/parallel/DashboardCmd (12.59s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-818000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-818000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3877: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.59s)

TestFunctional/parallel/DryRun (1.71s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-818000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-818000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (921.693477ms)

-- stdout --
	* [functional-818000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0729 03:30:29.316752    3787 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:30:29.317369    3787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:30:29.317381    3787 out.go:304] Setting ErrFile to fd 2...
	I0729 03:30:29.317389    3787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:30:29.317794    3787 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 03:30:29.340062    3787 out.go:298] Setting JSON to false
	I0729 03:30:29.364247    3787 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1799,"bootTime":1722247230,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 03:30:29.364331    3787 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:30:29.424580    3787 out.go:177] * [functional-818000] minikube v1.33.1 on Darwin 14.5
	I0729 03:30:29.487799    3787 notify.go:220] Checking for updates...
	I0729 03:30:29.508590    3787 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:30:29.571494    3787 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	I0729 03:30:29.634567    3787 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 03:30:29.697436    3787 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:30:29.739414    3787 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	I0729 03:30:29.781578    3787 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:30:29.804958    3787 config.go:182] Loaded profile config "functional-818000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:30:29.805594    3787 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:30:29.874073    3787 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 03:30:29.874292    3787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 03:30:29.961047    3787 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:72 SystemTime:2024-07-29 10:30:29.951911366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 03:30:30.003393    3787 out.go:177] * Using the docker driver based on existing profile
	I0729 03:30:30.024613    3787 start.go:297] selected driver: docker
	I0729 03:30:30.024637    3787 start.go:901] validating driver "docker" against &{Name:functional-818000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-818000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:30:30.024761    3787 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:30:30.068552    3787 out.go:177] 
	W0729 03:30:30.089594    3787 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 03:30:30.110627    3787 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-818000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.71s)

TestFunctional/parallel/InternationalLanguage (0.64s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-818000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-818000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (635.86988ms)

-- stdout --
	* [functional-818000] minikube v1.33.1 sur Darwin 14.5
	  - MINIKUBE_LOCATION=19337
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0729 03:30:30.978400    3854 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:30:30.978606    3854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:30:30.978612    3854 out.go:304] Setting ErrFile to fd 2...
	I0729 03:30:30.978616    3854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:30:30.978893    3854 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 03:30:30.981143    3854 out.go:298] Setting JSON to false
	I0729 03:30:31.004822    3854 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1800,"bootTime":1722247230,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0729 03:30:31.004913    3854 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0729 03:30:31.026782    3854 out.go:177] * [functional-818000] minikube v1.33.1 sur Darwin 14.5
	I0729 03:30:31.068831    3854 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 03:30:31.068897    3854 notify.go:220] Checking for updates...
	I0729 03:30:31.111638    3854 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
	I0729 03:30:31.132567    3854 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0729 03:30:31.153569    3854 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 03:30:31.174747    3854 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube
	I0729 03:30:31.195831    3854 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 03:30:31.216853    3854 config.go:182] Loaded profile config "functional-818000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:30:31.217216    3854 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 03:30:31.240670    3854 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0729 03:30:31.240840    3854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 03:30:31.326857    3854 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:72 SystemTime:2024-07-29 10:30:31.316638512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768077824 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0729 03:30:31.351685    3854 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0729 03:30:31.409682    3854 start.go:297] selected driver: docker
	I0729 03:30:31.409713    3854 start.go:901] validating driver "docker" against &{Name:functional-818000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-818000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 03:30:31.409848    3854 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 03:30:31.436614    3854 out.go:177] 
	W0729 03:30:31.459717    3854 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0729 03:30:31.484880    3854 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.64s)

TestFunctional/parallel/StatusCmd (0.88s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.88s)

TestFunctional/parallel/AddonsCmd (0.23s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/PersistentVolumeClaim (26.3s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [94c46d1e-1311-430c-9055-44eebaa6bef0] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005079411s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-818000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-818000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-818000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-818000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0f6cf83a-b2df-4cf6-96a7-9acad71700b1] Pending
helpers_test.go:344: "sp-pod" [0f6cf83a-b2df-4cf6-96a7-9acad71700b1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0f6cf83a-b2df-4cf6-96a7-9acad71700b1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004545108s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-818000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-818000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-818000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [59f5f9da-5177-48f1-9d2c-c9111b25d136] Pending
E0729 03:30:15.616401    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [59f5f9da-5177-48f1-9d2c-c9111b25d136] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [59f5f9da-5177-48f1-9d2c-c9111b25d136] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003629893s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-818000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.30s)

TestFunctional/parallel/SSHCmd (0.54s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)

TestFunctional/parallel/CpCmd (1.83s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh -n functional-818000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 cp functional-818000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd3964881460/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh -n functional-818000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh -n functional-818000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.83s)

TestFunctional/parallel/MySQL (29.03s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-818000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-hn72w" [609f0abe-b23f-4105-919c-bc2a5d8de8e2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-hn72w" [609f0abe-b23f-4105-919c-bc2a5d8de8e2] Running
E0729 03:29:34.656260    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.003428744s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-818000 exec mysql-64454c8b5c-hn72w -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-818000 exec mysql-64454c8b5c-hn72w -- mysql -ppassword -e "show databases;": exit status 1 (137.986199ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-818000 exec mysql-64454c8b5c-hn72w -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-818000 exec mysql-64454c8b5c-hn72w -- mysql -ppassword -e "show databases;": exit status 1 (122.962292ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-818000 exec mysql-64454c8b5c-hn72w -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-818000 exec mysql-64454c8b5c-hn72w -- mysql -ppassword -e "show databases;": exit status 1 (117.410727ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-818000 exec mysql-64454c8b5c-hn72w -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.03s)

TestFunctional/parallel/FileSync (0.27s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1909/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh "sudo cat /etc/test/nested/copy/1909/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.87s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1909.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh "sudo cat /etc/ssl/certs/1909.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1909.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh "sudo cat /usr/share/ca-certificates/1909.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/19092.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh "sudo cat /etc/ssl/certs/19092.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/19092.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh "sudo cat /usr/share/ca-certificates/19092.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.87s)

TestFunctional/parallel/NodeLabels (0.08s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-818000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.30s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-818000 ssh "sudo systemctl is-active crio": exit status 1 (298.050802ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.30s)

TestFunctional/parallel/License (0.61s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.61s)

TestFunctional/parallel/Version/short (0.10s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.48s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-818000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-818000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-818000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-818000 image ls --format short --alsologtostderr:
I0729 03:30:42.699663    4041 out.go:291] Setting OutFile to fd 1 ...
I0729 03:30:42.700006    4041 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:30:42.700013    4041 out.go:304] Setting ErrFile to fd 2...
I0729 03:30:42.700016    4041 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:30:42.700218    4041 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
I0729 03:30:42.700797    4041 config.go:182] Loaded profile config "functional-818000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:30:42.700895    4041 config.go:182] Loaded profile config "functional-818000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:30:42.701302    4041 cli_runner.go:164] Run: docker container inspect functional-818000 --format={{.State.Status}}
I0729 03:30:42.719993    4041 ssh_runner.go:195] Run: systemctl --version
I0729 03:30:42.720062    4041 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-818000
I0729 03:30:42.740761    4041 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50128 SSHKeyPath:/Users/jenkins/minikube-integration/19337-1372/.minikube/machines/functional-818000/id_rsa Username:docker}
I0729 03:30:42.828784    4041 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-818000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 55bb025d2cfa5 | 84.7MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 76932a3b37d7e | 111MB  |
| docker.io/library/nginx                     | alpine            | 1ae23480369fa | 43.2MB |
| registry.k8s.io/kube-scheduler              | v1.30.3           | 3edc18e7b7672 | 62MB   |
| docker.io/library/nginx                     | latest            | a72860cb95fd5 | 188MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-818000 | b472d7b8745c4 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 1f6d574d502f3 | 117MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/kicbase/echo-server               | functional-818000 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-818000 image ls --format table --alsologtostderr:
I0729 03:30:43.427067    4053 out.go:291] Setting OutFile to fd 1 ...
I0729 03:30:43.427276    4053 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:30:43.427282    4053 out.go:304] Setting ErrFile to fd 2...
I0729 03:30:43.427285    4053 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:30:43.427463    4053 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
I0729 03:30:43.428089    4053 config.go:182] Loaded profile config "functional-818000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:30:43.428183    4053 config.go:182] Loaded profile config "functional-818000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:30:43.428655    4053 cli_runner.go:164] Run: docker container inspect functional-818000 --format={{.State.Status}}
I0729 03:30:43.448936    4053 ssh_runner.go:195] Run: systemctl --version
I0729 03:30:43.449056    4053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-818000
I0729 03:30:43.470350    4053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50128 SSHKeyPath:/Users/jenkins/minikube-integration/19337-1372/.minikube/machines/functional-818000/id_rsa Username:docker}
I0729 03:30:43.558720    4053 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-818000 image ls --format json --alsologtostderr:
[{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"84700000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117000000"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"62000000"},{"id":"1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"b472d7b8745c471119f3a95b3fcb5dd55ebcfb67a790b9d1d09cbc94ca1d8771","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-818000"],"size":"30"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"111000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-818000"],"size":"4940000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-818000 image ls --format json --alsologtostderr:
I0729 03:30:42.950528    4045 out.go:291] Setting OutFile to fd 1 ...
I0729 03:30:42.950725    4045 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:30:42.950731    4045 out.go:304] Setting ErrFile to fd 2...
I0729 03:30:42.950735    4045 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:30:42.950923    4045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
I0729 03:30:42.952348    4045 config.go:182] Loaded profile config "functional-818000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:30:42.952450    4045 config.go:182] Loaded profile config "functional-818000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:30:42.952853    4045 cli_runner.go:164] Run: docker container inspect functional-818000 --format={{.State.Status}}
I0729 03:30:42.973083    4045 ssh_runner.go:195] Run: systemctl --version
I0729 03:30:42.973167    4045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-818000
I0729 03:30:42.994755    4045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50128 SSHKeyPath:/Users/jenkins/minikube-integration/19337-1372/.minikube/machines/functional-818000/id_rsa Username:docker}
I0729 03:30:43.083793    4045 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-818000 image ls --format yaml --alsologtostderr:
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-818000
size: "4940000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "62000000"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "84700000"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "111000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: b472d7b8745c471119f3a95b3fcb5dd55ebcfb67a790b9d1d09cbc94ca1d8771
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-818000
size: "30"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117000000"
- id: 1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-818000 image ls --format yaml --alsologtostderr:
I0729 03:30:43.189368    4049 out.go:291] Setting OutFile to fd 1 ...
I0729 03:30:43.189749    4049 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:30:43.189755    4049 out.go:304] Setting ErrFile to fd 2...
I0729 03:30:43.189759    4049 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:30:43.189947    4049 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
I0729 03:30:43.190578    4049 config.go:182] Loaded profile config "functional-818000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:30:43.190673    4049 config.go:182] Loaded profile config "functional-818000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:30:43.191086    4049 cli_runner.go:164] Run: docker container inspect functional-818000 --format={{.State.Status}}
I0729 03:30:43.210825    4049 ssh_runner.go:195] Run: systemctl --version
I0729 03:30:43.210917    4049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-818000
I0729 03:30:43.229424    4049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50128 SSHKeyPath:/Users/jenkins/minikube-integration/19337-1372/.minikube/machines/functional-818000/id_rsa Username:docker}
I0729 03:30:43.316436    4049 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-818000 ssh pgrep buildkitd: exit status 1 (228.138418ms)
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 image build -t localhost/my-image:functional-818000 testdata/build --alsologtostderr
2024/07/29 03:30:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-818000 image build -t localhost/my-image:functional-818000 testdata/build --alsologtostderr: (1.921422781s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-818000 image build -t localhost/my-image:functional-818000 testdata/build --alsologtostderr:
I0729 03:30:43.896091    4065 out.go:291] Setting OutFile to fd 1 ...
I0729 03:30:43.896535    4065 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:30:43.896541    4065 out.go:304] Setting ErrFile to fd 2...
I0729 03:30:43.896545    4065 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 03:30:43.896825    4065 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
I0729 03:30:43.897576    4065 config.go:182] Loaded profile config "functional-818000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:30:43.898792    4065 config.go:182] Loaded profile config "functional-818000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0729 03:30:43.899190    4065 cli_runner.go:164] Run: docker container inspect functional-818000 --format={{.State.Status}}
I0729 03:30:43.959371    4065 ssh_runner.go:195] Run: systemctl --version
I0729 03:30:43.959448    4065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-818000
I0729 03:30:43.988052    4065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50128 SSHKeyPath:/Users/jenkins/minikube-integration/19337-1372/.minikube/machines/functional-818000/id_rsa Username:docker}
I0729 03:30:44.075714    4065 build_images.go:161] Building image from path: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.3620851815.tar
I0729 03:30:44.075804    4065 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0729 03:30:44.085242    4065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3620851815.tar
I0729 03:30:44.089257    4065 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3620851815.tar: stat -c "%s %y" /var/lib/minikube/build/build.3620851815.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3620851815.tar': No such file or directory
I0729 03:30:44.089291    4065 ssh_runner.go:362] scp /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.3620851815.tar --> /var/lib/minikube/build/build.3620851815.tar (3072 bytes)
I0729 03:30:44.112150    4065 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3620851815
I0729 03:30:44.121044    4065 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3620851815 -xf /var/lib/minikube/build/build.3620851815.tar
I0729 03:30:44.130363    4065 docker.go:360] Building image: /var/lib/minikube/build/build.3620851815
I0729 03:30:44.130490    4065 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-818000 /var/lib/minikube/build/build.3620851815
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:f91d741d7d9ec542df118b4d87d413695c83d5c658e80d03c1fef91ff70ff759 done
#8 naming to localhost/my-image:functional-818000 done
#8 DONE 0.0s
I0729 03:30:45.717007    4065 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-818000 /var/lib/minikube/build/build.3620851815: (1.586505856s)
I0729 03:30:45.717066    4065 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3620851815
I0729 03:30:45.726140    4065 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3620851815.tar
I0729 03:30:45.734468    4065 build_images.go:217] Built localhost/my-image:functional-818000 from /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.3620851815.tar
I0729 03:30:45.734496    4065 build_images.go:133] succeeded building to: functional-818000
I0729 03:30:45.734502    4065 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.37s)
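The build log above shows the flow minikube uses for `image build`: build_images.go packages the local `testdata/build` context into a `build.NNN.tar`, scps it to the node, untars it under /var/lib/minikube/build, and only then runs `docker build`. A minimal local sketch of the pack/copy/unpack step — all paths and file contents here are hypothetical stand-ins, and the ssh hop plus the actual `docker build` are stubbed out since they need a live node:

```python
import os
import tarfile
import tempfile

with tempfile.TemporaryDirectory() as workdir:
    # Stand-in for the testdata/build context (Dockerfile reconstructed
    # from the #1..#7 build steps shown in the log above).
    context = os.path.join(workdir, "build")
    os.makedirs(context)
    with open(os.path.join(context, "Dockerfile"), "w") as f:
        f.write("FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n")
    with open(os.path.join(context, "content.txt"), "w") as f:
        f.write("hello\n")

    # build_images.go:161 equivalent: package the context into a tarball.
    tar_path = os.path.join(workdir, "build.tar")
    with tarfile.open(tar_path, "w") as tar:
        tar.add(context, arcname=".")

    # ssh_runner scp + `sudo tar -C <dir> -xf <tar>` equivalent,
    # with both ends local instead of host -> node.
    dest = os.path.join(workdir, "extracted")
    os.makedirs(dest)
    with tarfile.open(tar_path) as tar:
        tar.extractall(dest)

    # At this point minikube runs: docker build -t <tag> <dest>
    extracted = sorted(os.listdir(dest))
    print(extracted)
```

The round trip leaves the unpacked context ready for the in-node `docker build`, which is why the log cleans up both the tar and the extracted directory afterwards.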

TestFunctional/parallel/ImageCommands/Setup (1.73s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.695704426s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-818000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.73s)

TestFunctional/parallel/DockerEnv/bash (1.11s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-818000 docker-env) && out/minikube-darwin-amd64 status -p functional-818000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-818000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 image load --daemon docker.io/kicbase/echo-server:functional-818000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.18s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 image load --daemon docker.io/kicbase/echo-server:functional-818000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
E0729 03:29:14.176166    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-818000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 image load --daemon docker.io/kicbase/echo-server:functional-818000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 image save docker.io/kicbase/echo-server:functional-818000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 image rm docker.io/kicbase/echo-server:functional-818000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.93s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-818000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 image save --daemon docker.io/kicbase/echo-server:functional-818000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-818000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

TestFunctional/parallel/ServiceCmd/DeployApp (23.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-818000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-818000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-dmkv4" [2960f08b-2deb-47d3-8ee0-b3aa1929573f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-dmkv4" [2960f08b-2deb-47d3-8ee0-b3aa1929573f] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 23.003230125s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (23.16s)

TestFunctional/parallel/ServiceCmd/List (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.30s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 service list -o json
functional_test.go:1490: Took "293.802433ms" to run "out/minikube-darwin-amd64 -p functional-818000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-818000 service --namespace=default --https --url hello-node: signal: killed (15.002108939s)

-- stdout --
	https://127.0.0.1:50365

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:50365
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-818000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-818000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-818000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-818000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3382: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-818000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.15s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-818000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a32d5b74-52d7-43f4-8e42-41d6b11e516c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [a32d5b74-52d7-43f4-8e42-41d6b11e516c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.004717771s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.15s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-818000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-818000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3401: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-818000 service hello-node --url --format={{.IP}}: signal: killed (15.001710963s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-818000 service hello-node --url: signal: killed (15.003252428s)

-- stdout --
	http://127.0.0.1:50440

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:50440
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "289.268551ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "97.196133ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "287.15416ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "77.918757ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/MountCmd/any-port (8.07s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-818000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3254078424/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722249028918202000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3254078424/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722249028918202000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3254078424/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722249028918202000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3254078424/001/test-1722249028918202000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-818000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (246.707699ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 29 10:30 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 29 10:30 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 29 10:30 test-1722249028918202000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh cat /mount-9p/test-1722249028918202000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-818000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [25d581ba-8ebd-400b-8c92-3eea0d115adb] Pending
helpers_test.go:344: "busybox-mount" [25d581ba-8ebd-400b-8c92-3eea0d115adb] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [25d581ba-8ebd-400b-8c92-3eea0d115adb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [25d581ba-8ebd-400b-8c92-3eea0d115adb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.006469313s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-818000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-818000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3254078424/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.07s)
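The any-port test above stages three sentinel files in the host directory before mounting it into the guest over 9p: `created-by-test`, `created-by-test-removed-by-pod`, and a timestamped `test-<ns>` file, each holding the same timestamped payload (functional_test_mount_test.go:107). A minimal host-side sketch of just that staging step — directory and naming are hypothetical stand-ins, and the 9p mount plus the busybox-mount pod (which consumes one file and writes `created-by-pod`) are outside the sketch:

```python
import os
import tempfile
import time

# Stand-in for the TestFunctionalparallelMountCmdany-port temp dir
mount_src = tempfile.mkdtemp()

# Same timestamped payload written into all three sentinel files,
# mirroring the wrote "test-..." lines in the log above.
stamp = "test-%d" % time.time_ns()
for name in ("created-by-test", "created-by-test-removed-by-pod", stamp):
    with open(os.path.join(mount_src, name), "w") as f:
        f.write(stamp)

staged = sorted(os.listdir(mount_src))
print(staged)
```

After mounting, the guest-side checks (`findmnt -T /mount-9p`, `ls -la`, `cat`) only have to confirm these three names and payloads appear, then the pod's `created-by-pod` file proves writes propagate back to the host.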

TestFunctional/parallel/MountCmd/specific-port (2.04s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-818000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port998176885/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-818000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (274.099139ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-818000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port998176885/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-818000 ssh "sudo umount -f /mount-9p": exit status 1 (245.074665ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-818000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-818000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port998176885/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.04s)
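Note: the guest-side probe in this test is `findmnt -T /mount-9p | grep 9p`, which passes when a 9p filesystem row is present. A standalone sketch of that check, run against a fabricated mount-table line (illustrative only, not captured from this run):

```shell
# Canned findmnt-style output; the real test runs findmnt inside the guest.
findmnt_output='TARGET    SOURCE                 FSTYPE OPTIONS
/mount-9p 192.168.65.254:/mount1 9p     rw,sync,dirsync'

# grep exits 0 when a 9p row is found, 1 otherwise -- the same signal
# the test uses to decide whether the mount is up.
if printf '%s\n' "$findmnt_output" | grep -q 9p; then
  echo "mounted"
else
  echo "not mounted"
fi
```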

TestFunctional/parallel/MountCmd/VerifyCleanup (2.35s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-818000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup334537318/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-818000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup334537318/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-818000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup334537318/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-818000 ssh "findmnt -T" /mount1: exit status 1 (376.943236ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-818000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-818000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-818000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup334537318/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-818000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup334537318/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-818000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup334537318/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.35s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-818000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-818000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-818000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (106.34s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-644000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker 
E0729 03:31:37.537344    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-644000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker : (1m45.667489024s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (106.34s)

TestMultiControlPlane/serial/DeployApp (5.58s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-644000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-644000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-644000 -- rollout status deployment/busybox: (2.94752514s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-644000 -- exec busybox-fc5497c4f-h22tl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-644000 -- exec busybox-fc5497c4f-hrwqf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-644000 -- exec busybox-fc5497c4f-tz7dn -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-644000 -- exec busybox-fc5497c4f-h22tl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-644000 -- exec busybox-fc5497c4f-hrwqf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-644000 -- exec busybox-fc5497c4f-tz7dn -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-644000 -- exec busybox-fc5497c4f-h22tl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-644000 -- exec busybox-fc5497c4f-hrwqf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-644000 -- exec busybox-fc5497c4f-tz7dn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.58s)

TestMultiControlPlane/serial/PingHostFromPods (1.35s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-644000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-644000 -- exec busybox-fc5497c4f-h22tl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-644000 -- exec busybox-fc5497c4f-h22tl -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-644000 -- exec busybox-fc5497c4f-hrwqf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-644000 -- exec busybox-fc5497c4f-hrwqf -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-644000 -- exec busybox-fc5497c4f-tz7dn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-644000 -- exec busybox-fc5497c4f-tz7dn -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.35s)
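Note: the host-IP discovery above is the pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`: take the fifth line of nslookup output and extract its third space-separated field. A standalone sketch against canned busybox-style nslookup output (fabricated for illustration; the exact line layout of real busybox nslookup can vary by version):

```shell
# Five lines; the 5th is the answer record for the queried name.
nslookup_output='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.65.254 host.minikube.internal'

# NR==5 selects the answer line; field 3 is the IP, which the test
# then feeds to `ping -c 1`.
printf '%s\n' "$nslookup_output" | awk 'NR==5' | cut -d' ' -f3
```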

TestMultiControlPlane/serial/AddWorkerNode (20.56s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-644000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-644000 -v=7 --alsologtostderr: (19.720968863s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.56s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-644000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.66s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.66s)

TestMultiControlPlane/serial/CopyFile (16.05s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 cp testdata/cp-test.txt ha-644000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 cp ha-644000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile281205471/001/cp-test_ha-644000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 cp ha-644000:/home/docker/cp-test.txt ha-644000-m02:/home/docker/cp-test_ha-644000_ha-644000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m02 "sudo cat /home/docker/cp-test_ha-644000_ha-644000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 cp ha-644000:/home/docker/cp-test.txt ha-644000-m03:/home/docker/cp-test_ha-644000_ha-644000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m03 "sudo cat /home/docker/cp-test_ha-644000_ha-644000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 cp ha-644000:/home/docker/cp-test.txt ha-644000-m04:/home/docker/cp-test_ha-644000_ha-644000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m04 "sudo cat /home/docker/cp-test_ha-644000_ha-644000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 cp testdata/cp-test.txt ha-644000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 cp ha-644000-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile281205471/001/cp-test_ha-644000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 cp ha-644000-m02:/home/docker/cp-test.txt ha-644000:/home/docker/cp-test_ha-644000-m02_ha-644000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000 "sudo cat /home/docker/cp-test_ha-644000-m02_ha-644000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 cp ha-644000-m02:/home/docker/cp-test.txt ha-644000-m03:/home/docker/cp-test_ha-644000-m02_ha-644000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m03 "sudo cat /home/docker/cp-test_ha-644000-m02_ha-644000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 cp ha-644000-m02:/home/docker/cp-test.txt ha-644000-m04:/home/docker/cp-test_ha-644000-m02_ha-644000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m04 "sudo cat /home/docker/cp-test_ha-644000-m02_ha-644000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 cp testdata/cp-test.txt ha-644000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 cp ha-644000-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile281205471/001/cp-test_ha-644000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 cp ha-644000-m03:/home/docker/cp-test.txt ha-644000:/home/docker/cp-test_ha-644000-m03_ha-644000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000 "sudo cat /home/docker/cp-test_ha-644000-m03_ha-644000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 cp ha-644000-m03:/home/docker/cp-test.txt ha-644000-m02:/home/docker/cp-test_ha-644000-m03_ha-644000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m02 "sudo cat /home/docker/cp-test_ha-644000-m03_ha-644000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 cp ha-644000-m03:/home/docker/cp-test.txt ha-644000-m04:/home/docker/cp-test_ha-644000-m03_ha-644000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m04 "sudo cat /home/docker/cp-test_ha-644000-m03_ha-644000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 cp testdata/cp-test.txt ha-644000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 cp ha-644000-m04:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile281205471/001/cp-test_ha-644000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 cp ha-644000-m04:/home/docker/cp-test.txt ha-644000:/home/docker/cp-test_ha-644000-m04_ha-644000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000 "sudo cat /home/docker/cp-test_ha-644000-m04_ha-644000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 cp ha-644000-m04:/home/docker/cp-test.txt ha-644000-m02:/home/docker/cp-test_ha-644000-m04_ha-644000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m02 "sudo cat /home/docker/cp-test_ha-644000-m04_ha-644000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 cp ha-644000-m04:/home/docker/cp-test.txt ha-644000-m03:/home/docker/cp-test_ha-644000-m04_ha-644000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 ssh -n ha-644000-m03 "sudo cat /home/docker/cp-test_ha-644000-m04_ha-644000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.05s)
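Note: each cp/ssh pair above is a round-trip check -- copy a file to a node, then `cat` it back over ssh and compare against the source. A minimal local sketch of the same verification pattern, using a temp directory in place of the minikube nodes (paths and contents are illustrative):

```shell
# Stand-in for testdata/cp-test.txt and the node-side copy.
workdir=$(mktemp -d)
printf 'hello from cp-test\n' > "$workdir/cp-test.txt"

# "minikube cp host -> node" becomes a plain cp here.
cp "$workdir/cp-test.txt" "$workdir/cp-test_roundtrip.txt"

# cmp -s exits 0 only on a byte-for-byte match, mirroring the
# test's expectation that the copied file is identical.
if cmp -s "$workdir/cp-test.txt" "$workdir/cp-test_roundtrip.txt"; then
  echo "contents match"
fi
rm -r "$workdir"
```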

TestMultiControlPlane/serial/StopSecondaryNode (11.36s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-644000 node stop m02 -v=7 --alsologtostderr: (10.718814054s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-644000 status -v=7 --alsologtostderr: exit status 7 (636.23881ms)

-- stdout --
	ha-644000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-644000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-644000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-644000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0729 03:33:30.184774    4887 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:33:30.184979    4887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:33:30.184984    4887 out.go:304] Setting ErrFile to fd 2...
	I0729 03:33:30.184988    4887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:33:30.185174    4887 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 03:33:30.185364    4887 out.go:298] Setting JSON to false
	I0729 03:33:30.185386    4887 mustload.go:65] Loading cluster: ha-644000
	I0729 03:33:30.185427    4887 notify.go:220] Checking for updates...
	I0729 03:33:30.185696    4887 config.go:182] Loaded profile config "ha-644000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:33:30.185711    4887 status.go:255] checking status of ha-644000 ...
	I0729 03:33:30.186129    4887 cli_runner.go:164] Run: docker container inspect ha-644000 --format={{.State.Status}}
	I0729 03:33:30.204584    4887 status.go:330] ha-644000 host status = "Running" (err=<nil>)
	I0729 03:33:30.204617    4887 host.go:66] Checking if "ha-644000" exists ...
	I0729 03:33:30.204867    4887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-644000
	I0729 03:33:30.222892    4887 host.go:66] Checking if "ha-644000" exists ...
	I0729 03:33:30.223146    4887 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 03:33:30.223212    4887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-644000
	I0729 03:33:30.241524    4887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50591 SSHKeyPath:/Users/jenkins/minikube-integration/19337-1372/.minikube/machines/ha-644000/id_rsa Username:docker}
	I0729 03:33:30.328546    4887 ssh_runner.go:195] Run: systemctl --version
	I0729 03:33:30.334039    4887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 03:33:30.346053    4887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-644000
	I0729 03:33:30.364832    4887 kubeconfig.go:125] found "ha-644000" server: "https://127.0.0.1:50590"
	I0729 03:33:30.364865    4887 api_server.go:166] Checking apiserver status ...
	I0729 03:33:30.364906    4887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 03:33:30.375615    4887 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2385/cgroup
	W0729 03:33:30.384564    4887 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2385/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 03:33:30.384617    4887 ssh_runner.go:195] Run: ls
	I0729 03:33:30.388663    4887 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50590/healthz ...
	I0729 03:33:30.392685    4887 api_server.go:279] https://127.0.0.1:50590/healthz returned 200:
	ok
	I0729 03:33:30.392707    4887 status.go:422] ha-644000 apiserver status = Running (err=<nil>)
	I0729 03:33:30.392720    4887 status.go:257] ha-644000 status: &{Name:ha-644000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 03:33:30.392734    4887 status.go:255] checking status of ha-644000-m02 ...
	I0729 03:33:30.392983    4887 cli_runner.go:164] Run: docker container inspect ha-644000-m02 --format={{.State.Status}}
	I0729 03:33:30.411199    4887 status.go:330] ha-644000-m02 host status = "Stopped" (err=<nil>)
	I0729 03:33:30.411232    4887 status.go:343] host is not running, skipping remaining checks
	I0729 03:33:30.411245    4887 status.go:257] ha-644000-m02 status: &{Name:ha-644000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 03:33:30.411263    4887 status.go:255] checking status of ha-644000-m03 ...
	I0729 03:33:30.411555    4887 cli_runner.go:164] Run: docker container inspect ha-644000-m03 --format={{.State.Status}}
	I0729 03:33:30.429757    4887 status.go:330] ha-644000-m03 host status = "Running" (err=<nil>)
	I0729 03:33:30.429801    4887 host.go:66] Checking if "ha-644000-m03" exists ...
	I0729 03:33:30.430069    4887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-644000-m03
	I0729 03:33:30.447790    4887 host.go:66] Checking if "ha-644000-m03" exists ...
	I0729 03:33:30.448053    4887 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 03:33:30.448108    4887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-644000-m03
	I0729 03:33:30.467442    4887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50693 SSHKeyPath:/Users/jenkins/minikube-integration/19337-1372/.minikube/machines/ha-644000-m03/id_rsa Username:docker}
	I0729 03:33:30.553430    4887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 03:33:30.564883    4887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-644000
	I0729 03:33:30.583452    4887 kubeconfig.go:125] found "ha-644000" server: "https://127.0.0.1:50590"
	I0729 03:33:30.583474    4887 api_server.go:166] Checking apiserver status ...
	I0729 03:33:30.583518    4887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 03:33:30.593710    4887 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2278/cgroup
	W0729 03:33:30.604175    4887 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2278/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 03:33:30.604253    4887 ssh_runner.go:195] Run: ls
	I0729 03:33:30.608759    4887 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50590/healthz ...
	I0729 03:33:30.612622    4887 api_server.go:279] https://127.0.0.1:50590/healthz returned 200:
	ok
	I0729 03:33:30.612634    4887 status.go:422] ha-644000-m03 apiserver status = Running (err=<nil>)
	I0729 03:33:30.612643    4887 status.go:257] ha-644000-m03 status: &{Name:ha-644000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 03:33:30.612653    4887 status.go:255] checking status of ha-644000-m04 ...
	I0729 03:33:30.612897    4887 cli_runner.go:164] Run: docker container inspect ha-644000-m04 --format={{.State.Status}}
	I0729 03:33:30.630983    4887 status.go:330] ha-644000-m04 host status = "Running" (err=<nil>)
	I0729 03:33:30.631010    4887 host.go:66] Checking if "ha-644000-m04" exists ...
	I0729 03:33:30.631257    4887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-644000-m04
	I0729 03:33:30.649417    4887 host.go:66] Checking if "ha-644000-m04" exists ...
	I0729 03:33:30.649682    4887 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 03:33:30.649729    4887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-644000-m04
	I0729 03:33:30.668310    4887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50820 SSHKeyPath:/Users/jenkins/minikube-integration/19337-1372/.minikube/machines/ha-644000-m04/id_rsa Username:docker}
	I0729 03:33:30.754664    4887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 03:33:30.765092    4887 status.go:257] ha-644000-m04 status: &{Name:ha-644000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.36s)
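Note: the status probe in the stderr above checks node disk usage with `df -h /var | awk 'NR==2{print $5}'` -- the Use% column from the second line of df output. A standalone sketch against canned df output (the filesystem line is fabricated for illustration):

```shell
# Canned df -h output; line 1 is the header, line 2 the /var filesystem.
df_output='Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        59G   12G   44G  22% /var'

# awk splits on runs of whitespace by default, so $5 on NR==2
# is the Use% column regardless of column alignment.
printf '%s\n' "$df_output" | awk 'NR==2{print $5}'
```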

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (65.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 node start m02 -v=7 --alsologtostderr
E0729 03:33:53.684996    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 03:34:14.234164    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 03:34:14.239468    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 03:34:14.249856    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 03:34:14.269994    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 03:34:14.310141    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 03:34:14.390515    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 03:34:14.550624    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 03:34:14.871069    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 03:34:15.511803    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 03:34:16.793057    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 03:34:19.354063    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 03:34:21.377045    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 03:34:24.474748    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 03:34:34.716299    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-644000 node start m02 -v=7 --alsologtostderr: (1m4.241917762s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (65.14s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.66s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (174.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-644000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-644000 -v=7 --alsologtostderr
E0729 03:34:55.197418    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-644000 -v=7 --alsologtostderr: (33.764586159s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-644000 --wait=true -v=7 --alsologtostderr
E0729 03:35:36.158594    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
E0729 03:36:58.078811    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-644000 --wait=true -v=7 --alsologtostderr: (2m20.837140077s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-644000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (174.73s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-644000 node delete m03 -v=7 --alsologtostderr: (9.543009574s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.31s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.48s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (32.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-644000 stop -v=7 --alsologtostderr: (32.383060249s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-644000 status -v=7 --alsologtostderr: exit status 7 (110.975663ms)

                                                
                                                
-- stdout --
	ha-644000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-644000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-644000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 03:38:15.015131    5283 out.go:291] Setting OutFile to fd 1 ...
	I0729 03:38:15.015394    5283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:38:15.015400    5283 out.go:304] Setting ErrFile to fd 2...
	I0729 03:38:15.015403    5283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 03:38:15.015583    5283 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19337-1372/.minikube/bin
	I0729 03:38:15.015795    5283 out.go:298] Setting JSON to false
	I0729 03:38:15.015817    5283 mustload.go:65] Loading cluster: ha-644000
	I0729 03:38:15.015858    5283 notify.go:220] Checking for updates...
	I0729 03:38:15.016120    5283 config.go:182] Loaded profile config "ha-644000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0729 03:38:15.016134    5283 status.go:255] checking status of ha-644000 ...
	I0729 03:38:15.016533    5283 cli_runner.go:164] Run: docker container inspect ha-644000 --format={{.State.Status}}
	I0729 03:38:15.034854    5283 status.go:330] ha-644000 host status = "Stopped" (err=<nil>)
	I0729 03:38:15.034877    5283 status.go:343] host is not running, skipping remaining checks
	I0729 03:38:15.034886    5283 status.go:257] ha-644000 status: &{Name:ha-644000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 03:38:15.034915    5283 status.go:255] checking status of ha-644000-m02 ...
	I0729 03:38:15.035166    5283 cli_runner.go:164] Run: docker container inspect ha-644000-m02 --format={{.State.Status}}
	I0729 03:38:15.053060    5283 status.go:330] ha-644000-m02 host status = "Stopped" (err=<nil>)
	I0729 03:38:15.053097    5283 status.go:343] host is not running, skipping remaining checks
	I0729 03:38:15.053106    5283 status.go:257] ha-644000-m02 status: &{Name:ha-644000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 03:38:15.053127    5283 status.go:255] checking status of ha-644000-m04 ...
	I0729 03:38:15.053405    5283 cli_runner.go:164] Run: docker container inspect ha-644000-m04 --format={{.State.Status}}
	I0729 03:38:15.071012    5283 status.go:330] ha-644000-m04 host status = "Stopped" (err=<nil>)
	I0729 03:38:15.071035    5283 status.go:343] host is not running, skipping remaining checks
	I0729 03:38:15.071042    5283 status.go:257] ha-644000-m04 status: &{Name:ha-644000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.49s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (79.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-644000 --wait=true -v=7 --alsologtostderr --driver=docker 
E0729 03:38:53.683570    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
E0729 03:39:14.234513    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-644000 --wait=true -v=7 --alsologtostderr --driver=docker : (1m19.160270272s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (79.91s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.47s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (37.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-644000 --control-plane -v=7 --alsologtostderr
E0729 03:39:41.917780    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-644000 --control-plane -v=7 --alsologtostderr: (36.932508421s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-644000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (37.77s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.67s)

                                                
                                    
TestImageBuild/serial/Setup (21.73s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-792000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-792000 --driver=docker : (21.728229143s)
--- PASS: TestImageBuild/serial/Setup (21.73s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.66s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-792000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-792000: (1.660375938s)
--- PASS: TestImageBuild/serial/NormalBuild (1.66s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.81s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-792000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.81s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.67s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-792000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.67s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.66s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-792000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.66s)

                                                
                                    
TestJSONOutput/start/Command (41.34s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-931000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-931000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (41.340790041s)
--- PASS: TestJSONOutput/start/Command (41.34s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.45s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-931000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.45s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.49s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-931000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.49s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.72s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-931000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-931000 --output=json --user=testUser: (5.719503181s)
--- PASS: TestJSONOutput/stop/Command (5.72s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.57s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-393000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-393000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (358.610918ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"70889c72-8ae8-4425-9a28-5f34a5365780","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-393000] minikube v1.33.1 on Darwin 14.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"249546be-68fa-4304-815e-a4c3b419407e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19337"}}
	{"specversion":"1.0","id":"28d2cfcd-ab1c-4e15-be07-253cf1e4c1da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig"}}
	{"specversion":"1.0","id":"a3936c89-760f-49d3-9d0d-45d5f65c9b31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"18eaae6e-cec0-49fa-a770-85296e9ce8fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"de91e2a1-42fb-4af2-98f7-a7ea90edf23f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19337-1372/.minikube"}}
	{"specversion":"1.0","id":"7e9e9f3d-856f-45b6-87c4-c0c288bb3001","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"503ba281-4c4b-4487-a2eb-91fa28600676","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-393000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-393000
--- PASS: TestErrorJSONOutput (0.57s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (22.08s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-813000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-813000 --network=: (20.130014618s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-813000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-813000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-813000: (1.932085503s)
--- PASS: TestKicCustomNetwork/create_custom_network (22.08s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (21.63s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-115000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-115000 --network=bridge: (19.759816475s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-115000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-115000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-115000: (1.848870072s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (21.63s)

                                                
                                    
TestKicExistingNetwork (22.6s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-924000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-924000 --network=existing-network: (20.581686979s)
helpers_test.go:175: Cleaning up "existing-network-924000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-924000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-924000: (1.844715091s)
--- PASS: TestKicExistingNetwork (22.60s)

TestKicCustomSubnet (22.47s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-308000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-308000 --subnet=192.168.60.0/24: (20.510370789s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-308000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-308000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-308000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-308000: (1.944090402s)
--- PASS: TestKicCustomSubnet (22.47s)

TestKicStaticIP (21.64s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-905000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-905000 --static-ip=192.168.200.200: (19.529381886s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-905000 ip
helpers_test.go:175: Cleaning up "static-ip-905000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-905000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-905000: (1.942708949s)
--- PASS: TestKicStaticIP (21.64s)

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (47.61s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-614000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-614000 --driver=docker : (21.212180949s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-616000 --driver=docker 
E0729 03:43:53.658302    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/addons-257000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-616000 --driver=docker : (21.429365757s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-614000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-616000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-616000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-616000
E0729 03:44:14.209228    1909 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19337-1372/.minikube/profiles/functional-818000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-616000: (1.812795568s)
helpers_test.go:175: Cleaning up "first-614000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-614000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-614000: (1.984517769s)
--- PASS: TestMinikubeProfile (47.61s)

TestMountStart/serial/StartWithMountFirst (6.97s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-037000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-037000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (5.970097003s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.97s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-037000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (7.74s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-051000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-051000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.744127329s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.74s)

TestPreload (90.26s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-003000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-003000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (57.212871775s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-003000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-003000 image pull gcr.io/k8s-minikube/busybox: (1.34333225s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-003000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-003000: (10.786527613s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-003000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-003000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (18.621827669s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-003000 image list
helpers_test.go:175: Cleaning up "test-preload-003000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-003000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-003000: (2.051663698s)
--- PASS: TestPreload (90.26s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (9.12s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19337
- KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2614682605/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2614682605/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2614682605/001/.minikube/bin/docker-machine-driver-hyperkit

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2614682605/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (9.12s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.55s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19337
- KUBECONFIG=/Users/jenkins/minikube-integration/19337-1372/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3771454575/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3771454575/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3771454575/001/.minikube/bin/docker-machine-driver-hyperkit

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3771454575/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.55s)

Test skip (19/212)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestAddons/parallel/Registry (13.71s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.519249ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-hmqvj" [a6af156f-3662-4c68-93bd-095da0b5cb33] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004760385s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-spm66" [c4ed43b7-dd33-4adf-9f56-97cbe4295189] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004685258s
addons_test.go:342: (dbg) Run:  kubectl --context addons-257000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-257000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-257000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.641005556s)
addons_test.go:357: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (13.71s)

TestAddons/parallel/Ingress (19.65s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-257000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-257000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-257000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e426d413-129a-4892-8ec0-57275699e951] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e426d413-129a-4892-8ec0-57275699e951] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 19.006486925s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-257000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (19.65s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (7.13s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-818000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-818000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-rvcrm" [66d82507-7b70-4fa2-bbde-c06b87c0c17b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-rvcrm" [66d82507-7b70-4fa2-bbde-c06b87c0c17b] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.006709896s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.13s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)