Test Report: Docker_macOS 18259

540f885a6d6e66248f116de2dd0a4210cbfa2dfa:2024-02-29:33352

Test fail (23/195)

TestOffline (754.26s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-158000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-158000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m33.354265939s)

-- stdout --
	* [offline-docker-158000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18259
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node offline-docker-158000 in cluster offline-docker-158000
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-158000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	
-- /stdout --
** stderr ** 
	I0229 11:08:51.200120    9021 out.go:291] Setting OutFile to fd 1 ...
	I0229 11:08:51.200366    9021 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 11:08:51.200371    9021 out.go:304] Setting ErrFile to fd 2...
	I0229 11:08:51.200375    9021 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 11:08:51.200546    9021 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
	I0229 11:08:51.201938    9021 out.go:298] Setting JSON to false
	I0229 11:08:51.224053    9021 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5901,"bootTime":1709227830,"procs":497,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0229 11:08:51.224143    9021 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 11:08:51.246488    9021 out.go:177] * [offline-docker-158000] minikube v1.32.0 on Darwin 14.3.1
	I0229 11:08:51.290059    9021 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 11:08:51.290186    9021 notify.go:220] Checking for updates...
	I0229 11:08:51.311179    9021 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig
	I0229 11:08:51.333271    9021 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0229 11:08:51.355100    9021 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 11:08:51.376930    9021 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	I0229 11:08:51.398079    9021 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 11:08:51.419509    9021 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 11:08:51.475017    9021 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 11:08:51.475212    9021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 11:08:51.576975    9021 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:false NGoroutines:195 SystemTime:2024-02-29 19:08:51.567571994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 11:08:51.620430    9021 out.go:177] * Using the docker driver based on user configuration
	I0229 11:08:51.641438    9021 start.go:299] selected driver: docker
	I0229 11:08:51.641487    9021 start.go:903] validating driver "docker" against <nil>
	I0229 11:08:51.641503    9021 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 11:08:51.645764    9021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 11:08:51.743480    9021 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:false NGoroutines:195 SystemTime:2024-02-29 19:08:51.734005768 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 11:08:51.743647    9021 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 11:08:51.743844    9021 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 11:08:51.764725    9021 out.go:177] * Using Docker Desktop driver with root privileges
	I0229 11:08:51.786748    9021 cni.go:84] Creating CNI manager for ""
	I0229 11:08:51.786788    9021 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 11:08:51.786803    9021 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 11:08:51.786831    9021 start_flags.go:323] config:
	{Name:offline-docker-158000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-158000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 11:08:51.808580    9021 out.go:177] * Starting control plane node offline-docker-158000 in cluster offline-docker-158000
	I0229 11:08:51.850805    9021 cache.go:121] Beginning downloading kic base image for docker with docker
	I0229 11:08:51.871829    9021 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0229 11:08:51.913557    9021 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 11:08:51.913590    9021 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 11:08:51.913627    9021 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 11:08:51.913642    9021 cache.go:56] Caching tarball of preloaded images
	I0229 11:08:51.913804    9021 preload.go:174] Found /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 11:08:51.913821    9021 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 11:08:51.914930    9021 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/offline-docker-158000/config.json ...
	I0229 11:08:51.915026    9021 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/offline-docker-158000/config.json: {Name:mk78600c558f75f5976317b20b749da276f751db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 11:08:51.963707    9021 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0229 11:08:51.963770    9021 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0229 11:08:51.963789    9021 cache.go:194] Successfully downloaded all kic artifacts
	I0229 11:08:51.963861    9021 start.go:365] acquiring machines lock for offline-docker-158000: {Name:mk8c342834689ae2d1b377c1888c5b8336679323 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 11:08:51.964010    9021 start.go:369] acquired machines lock for "offline-docker-158000" in 135.783µs
	I0229 11:08:51.964034    9021 start.go:93] Provisioning new machine with config: &{Name:offline-docker-158000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-158000 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 11:08:51.964105    9021 start.go:125] createHost starting for "" (driver="docker")
	I0229 11:08:52.006439    9021 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0229 11:08:52.006700    9021 start.go:159] libmachine.API.Create for "offline-docker-158000" (driver="docker")
	I0229 11:08:52.006741    9021 client.go:168] LocalClient.Create starting
	I0229 11:08:52.006896    9021 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18259-932/.minikube/certs/ca.pem
	I0229 11:08:52.006982    9021 main.go:141] libmachine: Decoding PEM data...
	I0229 11:08:52.006999    9021 main.go:141] libmachine: Parsing certificate...
	I0229 11:08:52.007080    9021 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18259-932/.minikube/certs/cert.pem
	I0229 11:08:52.007120    9021 main.go:141] libmachine: Decoding PEM data...
	I0229 11:08:52.007127    9021 main.go:141] libmachine: Parsing certificate...
	I0229 11:08:52.007636    9021 cli_runner.go:164] Run: docker network inspect offline-docker-158000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0229 11:08:52.057248    9021 cli_runner.go:211] docker network inspect offline-docker-158000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0229 11:08:52.057346    9021 network_create.go:281] running [docker network inspect offline-docker-158000] to gather additional debugging logs...
	I0229 11:08:52.057363    9021 cli_runner.go:164] Run: docker network inspect offline-docker-158000
	W0229 11:08:52.106630    9021 cli_runner.go:211] docker network inspect offline-docker-158000 returned with exit code 1
	I0229 11:08:52.106664    9021 network_create.go:284] error running [docker network inspect offline-docker-158000]: docker network inspect offline-docker-158000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-158000 not found
	I0229 11:08:52.106676    9021 network_create.go:286] output of [docker network inspect offline-docker-158000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-158000 not found
	
	** /stderr **
	I0229 11:08:52.106797    9021 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 11:08:52.157657    9021 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 11:08:52.158058    9021 network.go:207] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002312e70}
	I0229 11:08:52.158074    9021 network_create.go:124] attempt to create docker network offline-docker-158000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0229 11:08:52.158155    9021 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-158000 offline-docker-158000
	I0229 11:08:52.243291    9021 network_create.go:108] docker network offline-docker-158000 192.168.58.0/24 created
	I0229 11:08:52.243335    9021 kic.go:121] calculated static IP "192.168.58.2" for the "offline-docker-158000" container
	I0229 11:08:52.243448    9021 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0229 11:08:52.294123    9021 cli_runner.go:164] Run: docker volume create offline-docker-158000 --label name.minikube.sigs.k8s.io=offline-docker-158000 --label created_by.minikube.sigs.k8s.io=true
	I0229 11:08:52.344247    9021 oci.go:103] Successfully created a docker volume offline-docker-158000
	I0229 11:08:52.344351    9021 cli_runner.go:164] Run: docker run --rm --name offline-docker-158000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-158000 --entrypoint /usr/bin/test -v offline-docker-158000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0229 11:08:52.704505    9021 oci.go:107] Successfully prepared a docker volume offline-docker-158000
	I0229 11:08:52.704540    9021 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 11:08:52.704552    9021 kic.go:194] Starting extracting preloaded images to volume ...
	I0229 11:08:52.704671    9021 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-158000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0229 11:14:51.995679    9021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 11:14:51.995813    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:14:52.046482    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	I0229 11:14:52.046609    9021 retry.go:31] will retry after 261.266233ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:14:52.308680    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:14:52.367111    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	I0229 11:14:52.367222    9021 retry.go:31] will retry after 503.54145ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:14:52.871201    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:14:52.920495    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	I0229 11:14:52.920609    9021 retry.go:31] will retry after 678.638757ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:14:53.601480    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:14:53.653099    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	W0229 11:14:53.653204    9021 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	
	W0229 11:14:53.653230    9021 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:14:53.653285    9021 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 11:14:53.653340    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:14:53.702652    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	I0229 11:14:53.702750    9021 retry.go:31] will retry after 313.017095ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:14:54.016517    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:14:54.066774    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	I0229 11:14:54.066870    9021 retry.go:31] will retry after 463.413159ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:14:54.530582    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:14:54.581616    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	I0229 11:14:54.581708    9021 retry.go:31] will retry after 458.342107ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:14:55.040634    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:14:55.091451    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	W0229 11:14:55.091557    9021 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	
	W0229 11:14:55.091575    9021 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:14:55.091587    9021 start.go:128] duration metric: createHost completed in 6m3.139526963s
	I0229 11:14:55.091594    9021 start.go:83] releasing machines lock for "offline-docker-158000", held for 6m3.139632996s
	W0229 11:14:55.091609    9021 start.go:694] error starting host: creating host: create host timed out in 360.000000 seconds
	I0229 11:14:55.092041    9021 cli_runner.go:164] Run: docker container inspect offline-docker-158000 --format={{.State.Status}}
	W0229 11:14:55.141670    9021 cli_runner.go:211] docker container inspect offline-docker-158000 --format={{.State.Status}} returned with exit code 1
	I0229 11:14:55.141740    9021 delete.go:82] Unable to get host status for offline-docker-158000, assuming it has already been deleted: state: unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	W0229 11:14:55.141831    9021 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0229 11:14:55.141840    9021 start.go:709] Will try again in 5 seconds ...
	I0229 11:15:00.142086    9021 start.go:365] acquiring machines lock for offline-docker-158000: {Name:mk8c342834689ae2d1b377c1888c5b8336679323 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 11:15:00.142776    9021 start.go:369] acquired machines lock for "offline-docker-158000" in 628.851µs
	I0229 11:15:00.142904    9021 start.go:96] Skipping create...Using existing machine configuration
	I0229 11:15:00.142917    9021 fix.go:54] fixHost starting: 
	I0229 11:15:00.143293    9021 cli_runner.go:164] Run: docker container inspect offline-docker-158000 --format={{.State.Status}}
	W0229 11:15:00.193494    9021 cli_runner.go:211] docker container inspect offline-docker-158000 --format={{.State.Status}} returned with exit code 1
	I0229 11:15:00.193539    9021 fix.go:102] recreateIfNeeded on offline-docker-158000: state= err=unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:15:00.193558    9021 fix.go:107] machineExists: false. err=machine does not exist
	I0229 11:15:00.215222    9021 out.go:177] * docker "offline-docker-158000" container is missing, will recreate.
	I0229 11:15:00.258072    9021 delete.go:124] DEMOLISHING offline-docker-158000 ...
	I0229 11:15:00.258242    9021 cli_runner.go:164] Run: docker container inspect offline-docker-158000 --format={{.State.Status}}
	W0229 11:15:00.309199    9021 cli_runner.go:211] docker container inspect offline-docker-158000 --format={{.State.Status}} returned with exit code 1
	W0229 11:15:00.309262    9021 stop.go:75] unable to get state: unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:15:00.309284    9021 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:15:00.309663    9021 cli_runner.go:164] Run: docker container inspect offline-docker-158000 --format={{.State.Status}}
	W0229 11:15:00.359144    9021 cli_runner.go:211] docker container inspect offline-docker-158000 --format={{.State.Status}} returned with exit code 1
	I0229 11:15:00.359211    9021 delete.go:82] Unable to get host status for offline-docker-158000, assuming it has already been deleted: state: unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:15:00.359292    9021 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-158000
	W0229 11:15:00.408241    9021 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-158000 returned with exit code 1
	I0229 11:15:00.408285    9021 kic.go:371] could not find the container offline-docker-158000 to remove it. will try anyways
	I0229 11:15:00.408355    9021 cli_runner.go:164] Run: docker container inspect offline-docker-158000 --format={{.State.Status}}
	W0229 11:15:00.456731    9021 cli_runner.go:211] docker container inspect offline-docker-158000 --format={{.State.Status}} returned with exit code 1
	W0229 11:15:00.456777    9021 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:15:00.456861    9021 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-158000 /bin/bash -c "sudo init 0"
	W0229 11:15:00.505926    9021 cli_runner.go:211] docker exec --privileged -t offline-docker-158000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0229 11:15:00.505959    9021 oci.go:650] error shutdown offline-docker-158000: docker exec --privileged -t offline-docker-158000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:15:01.506292    9021 cli_runner.go:164] Run: docker container inspect offline-docker-158000 --format={{.State.Status}}
	W0229 11:15:01.558285    9021 cli_runner.go:211] docker container inspect offline-docker-158000 --format={{.State.Status}} returned with exit code 1
	I0229 11:15:01.558339    9021 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:15:01.558349    9021 oci.go:664] temporary error: container offline-docker-158000 status is  but expect it to be exited
	I0229 11:15:01.558377    9021 retry.go:31] will retry after 425.655537ms: couldn't verify container is exited. %v: unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:15:01.986165    9021 cli_runner.go:164] Run: docker container inspect offline-docker-158000 --format={{.State.Status}}
	W0229 11:15:02.036399    9021 cli_runner.go:211] docker container inspect offline-docker-158000 --format={{.State.Status}} returned with exit code 1
	I0229 11:15:02.036453    9021 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:15:02.036466    9021 oci.go:664] temporary error: container offline-docker-158000 status is  but expect it to be exited
	I0229 11:15:02.036490    9021 retry.go:31] will retry after 709.928297ms: couldn't verify container is exited. %v: unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:15:02.747857    9021 cli_runner.go:164] Run: docker container inspect offline-docker-158000 --format={{.State.Status}}
	W0229 11:15:02.799348    9021 cli_runner.go:211] docker container inspect offline-docker-158000 --format={{.State.Status}} returned with exit code 1
	I0229 11:15:02.799397    9021 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:15:02.799409    9021 oci.go:664] temporary error: container offline-docker-158000 status is  but expect it to be exited
	I0229 11:15:02.799429    9021 retry.go:31] will retry after 930.679299ms: couldn't verify container is exited. %v: unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:15:03.730335    9021 cli_runner.go:164] Run: docker container inspect offline-docker-158000 --format={{.State.Status}}
	W0229 11:15:03.782770    9021 cli_runner.go:211] docker container inspect offline-docker-158000 --format={{.State.Status}} returned with exit code 1
	I0229 11:15:03.782820    9021 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:15:03.782831    9021 oci.go:664] temporary error: container offline-docker-158000 status is  but expect it to be exited
	I0229 11:15:03.782850    9021 retry.go:31] will retry after 2.43634524s: couldn't verify container is exited. %v: unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:15:06.220071    9021 cli_runner.go:164] Run: docker container inspect offline-docker-158000 --format={{.State.Status}}
	W0229 11:15:06.270576    9021 cli_runner.go:211] docker container inspect offline-docker-158000 --format={{.State.Status}} returned with exit code 1
	I0229 11:15:06.270623    9021 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:15:06.270634    9021 oci.go:664] temporary error: container offline-docker-158000 status is  but expect it to be exited
	I0229 11:15:06.270658    9021 retry.go:31] will retry after 1.544774328s: couldn't verify container is exited. %v: unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:15:07.816851    9021 cli_runner.go:164] Run: docker container inspect offline-docker-158000 --format={{.State.Status}}
	W0229 11:15:07.867387    9021 cli_runner.go:211] docker container inspect offline-docker-158000 --format={{.State.Status}} returned with exit code 1
	I0229 11:15:07.867442    9021 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:15:07.867450    9021 oci.go:664] temporary error: container offline-docker-158000 status is  but expect it to be exited
	I0229 11:15:07.867476    9021 retry.go:31] will retry after 3.34373003s: couldn't verify container is exited. %v: unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:15:11.212786    9021 cli_runner.go:164] Run: docker container inspect offline-docker-158000 --format={{.State.Status}}
	W0229 11:15:11.263628    9021 cli_runner.go:211] docker container inspect offline-docker-158000 --format={{.State.Status}} returned with exit code 1
	I0229 11:15:11.263686    9021 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:15:11.263698    9021 oci.go:664] temporary error: container offline-docker-158000 status is  but expect it to be exited
	I0229 11:15:11.263721    9021 retry.go:31] will retry after 6.458647032s: couldn't verify container is exited. %v: unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:15:17.722558    9021 cli_runner.go:164] Run: docker container inspect offline-docker-158000 --format={{.State.Status}}
	W0229 11:15:17.773111    9021 cli_runner.go:211] docker container inspect offline-docker-158000 --format={{.State.Status}} returned with exit code 1
	I0229 11:15:17.773157    9021 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:15:17.773167    9021 oci.go:664] temporary error: container offline-docker-158000 status is  but expect it to be exited
	I0229 11:15:17.773199    9021 oci.go:88] couldn't shut down offline-docker-158000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	 
	I0229 11:15:17.773271    9021 cli_runner.go:164] Run: docker rm -f -v offline-docker-158000
	I0229 11:15:17.823187    9021 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-158000
	W0229 11:15:17.872204    9021 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-158000 returned with exit code 1
	I0229 11:15:17.872309    9021 cli_runner.go:164] Run: docker network inspect offline-docker-158000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 11:15:17.922549    9021 cli_runner.go:164] Run: docker network rm offline-docker-158000
	I0229 11:15:18.022857    9021 fix.go:114] Sleeping 1 second for extra luck!
	I0229 11:15:19.024276    9021 start.go:125] createHost starting for "" (driver="docker")
	I0229 11:15:19.046310    9021 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0229 11:15:19.046544    9021 start.go:159] libmachine.API.Create for "offline-docker-158000" (driver="docker")
	I0229 11:15:19.046579    9021 client.go:168] LocalClient.Create starting
	I0229 11:15:19.046779    9021 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18259-932/.minikube/certs/ca.pem
	I0229 11:15:19.046873    9021 main.go:141] libmachine: Decoding PEM data...
	I0229 11:15:19.046899    9021 main.go:141] libmachine: Parsing certificate...
	I0229 11:15:19.046985    9021 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18259-932/.minikube/certs/cert.pem
	I0229 11:15:19.047069    9021 main.go:141] libmachine: Decoding PEM data...
	I0229 11:15:19.047084    9021 main.go:141] libmachine: Parsing certificate...
	I0229 11:15:19.068597    9021 cli_runner.go:164] Run: docker network inspect offline-docker-158000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0229 11:15:19.119970    9021 cli_runner.go:211] docker network inspect offline-docker-158000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0229 11:15:19.120072    9021 network_create.go:281] running [docker network inspect offline-docker-158000] to gather additional debugging logs...
	I0229 11:15:19.120088    9021 cli_runner.go:164] Run: docker network inspect offline-docker-158000
	W0229 11:15:19.169999    9021 cli_runner.go:211] docker network inspect offline-docker-158000 returned with exit code 1
	I0229 11:15:19.170032    9021 network_create.go:284] error running [docker network inspect offline-docker-158000]: docker network inspect offline-docker-158000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-158000 not found
	I0229 11:15:19.170048    9021 network_create.go:286] output of [docker network inspect offline-docker-158000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-158000 not found
	
	** /stderr **
	I0229 11:15:19.170177    9021 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 11:15:19.221987    9021 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 11:15:19.223554    9021 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 11:15:19.223914    9021 network.go:207] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013b10}
	I0229 11:15:19.223930    9021 network_create.go:124] attempt to create docker network offline-docker-158000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0229 11:15:19.223998    9021 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-158000 offline-docker-158000
	W0229 11:15:19.273406    9021 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-158000 offline-docker-158000 returned with exit code 1
	W0229 11:15:19.273447    9021 network_create.go:149] failed to create docker network offline-docker-158000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-158000 offline-docker-158000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0229 11:15:19.273467    9021 network_create.go:116] failed to create docker network offline-docker-158000 192.168.67.0/24, will retry: subnet is taken
	I0229 11:15:19.275116    9021 network.go:210] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 11:15:19.275506    9021 network.go:207] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00231a9d0}
	I0229 11:15:19.275518    9021 network_create.go:124] attempt to create docker network offline-docker-158000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0229 11:15:19.275582    9021 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-158000 offline-docker-158000
	W0229 11:15:19.325268    9021 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-158000 offline-docker-158000 returned with exit code 1
	W0229 11:15:19.325304    9021 network_create.go:149] failed to create docker network offline-docker-158000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-158000 offline-docker-158000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0229 11:15:19.325323    9021 network_create.go:116] failed to create docker network offline-docker-158000 192.168.76.0/24, will retry: subnet is taken
	I0229 11:15:19.326703    9021 network.go:210] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 11:15:19.327146    9021 network.go:207] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00240e640}
	I0229 11:15:19.327162    9021 network_create.go:124] attempt to create docker network offline-docker-158000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I0229 11:15:19.327236    9021 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-158000 offline-docker-158000
	I0229 11:15:19.413054    9021 network_create.go:108] docker network offline-docker-158000 192.168.85.0/24 created
	I0229 11:15:19.413097    9021 kic.go:121] calculated static IP "192.168.85.2" for the "offline-docker-158000" container
	I0229 11:15:19.413202    9021 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0229 11:15:19.464788    9021 cli_runner.go:164] Run: docker volume create offline-docker-158000 --label name.minikube.sigs.k8s.io=offline-docker-158000 --label created_by.minikube.sigs.k8s.io=true
	I0229 11:15:19.513756    9021 oci.go:103] Successfully created a docker volume offline-docker-158000
	I0229 11:15:19.513867    9021 cli_runner.go:164] Run: docker run --rm --name offline-docker-158000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-158000 --entrypoint /usr/bin/test -v offline-docker-158000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0229 11:15:19.790288    9021 oci.go:107] Successfully prepared a docker volume offline-docker-158000
	I0229 11:15:19.790326    9021 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 11:15:19.790339    9021 kic.go:194] Starting extracting preloaded images to volume ...
	I0229 11:15:19.790459    9021 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-158000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0229 11:21:19.035015    9021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 11:21:19.035087    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:21:19.083921    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	I0229 11:21:19.084038    9021 retry.go:31] will retry after 219.016636ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:21:19.303588    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:21:19.355410    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	I0229 11:21:19.355511    9021 retry.go:31] will retry after 419.2579ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:21:19.775559    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:21:19.825965    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	I0229 11:21:19.826073    9021 retry.go:31] will retry after 536.937411ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:21:20.364079    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:21:20.414368    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	W0229 11:21:20.414475    9021 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	
	W0229 11:21:20.414497    9021 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:21:20.414551    9021 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 11:21:20.414606    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:21:20.463694    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	I0229 11:21:20.463804    9021 retry.go:31] will retry after 226.701455ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:21:20.692093    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:21:20.741807    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	I0229 11:21:20.741909    9021 retry.go:31] will retry after 327.530219ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:21:21.070089    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:21:21.120598    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	I0229 11:21:21.120698    9021 retry.go:31] will retry after 574.834933ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:21:21.696254    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:21:21.747243    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	W0229 11:21:21.747351    9021 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	
	W0229 11:21:21.747390    9021 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:21:21.747405    9021 start.go:128] duration metric: createHost completed in 6m2.735154236s
	I0229 11:21:21.747494    9021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 11:21:21.747557    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:21:21.796853    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	I0229 11:21:21.796945    9021 retry.go:31] will retry after 252.518913ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:21:22.051409    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:21:22.104504    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	I0229 11:21:22.104595    9021 retry.go:31] will retry after 298.573848ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:21:22.405523    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:21:22.456640    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	I0229 11:21:22.456741    9021 retry.go:31] will retry after 630.542274ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:21:23.087817    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:21:23.137976    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	W0229 11:21:23.138073    9021 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	
	W0229 11:21:23.138090    9021 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:21:23.138162    9021 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 11:21:23.138220    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:21:23.186930    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	I0229 11:21:23.187024    9021 retry.go:31] will retry after 201.626365ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:21:23.389476    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:21:23.441346    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	I0229 11:21:23.441447    9021 retry.go:31] will retry after 475.864758ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:21:23.919170    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:21:23.971561    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	I0229 11:21:23.971726    9021 retry.go:31] will retry after 312.736445ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:21:24.285226    9021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000
	W0229 11:21:24.335801    9021 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000 returned with exit code 1
	W0229 11:21:24.335905    9021 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	
	W0229 11:21:24.335925    9021 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-158000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-158000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000
	I0229 11:21:24.335932    9021 fix.go:56] fixHost completed within 6m24.205772264s
	I0229 11:21:24.335937    9021 start.go:83] releasing machines lock for "offline-docker-158000", held for 6m24.20582082s
	W0229 11:21:24.336028    9021 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-158000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-158000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0229 11:21:24.357763    9021 out.go:177] 
	W0229 11:21:24.379539    9021 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0229 11:21:24.379586    9021 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0229 11:21:24.379615    9021 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0229 11:21:24.422488    9021 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-158000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:626: *** TestOffline FAILED at 2024-02-29 11:21:24.478915 -0800 PST m=+6268.757601077
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-158000
helpers_test.go:235: (dbg) docker inspect offline-docker-158000:

-- stdout --
	[
	    {
	        "Name": "offline-docker-158000",
	        "Id": "bb7357dc9d81a522e331461f22a730ed4d4acb336fdf876c7bd87ca1c14d5a41",
	        "Created": "2024-02-29T19:15:19.374155448Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-158000"
	        }
	    }
	]

-- /stdout --
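The inspect output above describes the leftover Docker network, not a container: the bridge network offline-docker-158000 (192.168.85.0/24) was created at 19:15:19Z and its "Containers" map is empty, which matches the repeated "No such container: offline-docker-158000" errors in the log. Cleanup sketch: the first command below is the one the log itself suggests (and the cleanup step further down actually runs); the second is a hypothetical manual fallback, only relevant if the network alone were left behind.

	out/minikube-darwin-amd64 delete -p offline-docker-158000   # removes the profile and any leftover container/network
	docker network rm offline-docker-158000                     # manual fallback for an orphaned network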
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-158000 -n offline-docker-158000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-158000 -n offline-docker-158000: exit status 7 (112.598292ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0229 11:21:24.642898    9403 status.go:249] status error: host: state: unknown state "offline-docker-158000": docker container inspect offline-docker-158000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-158000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-158000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-158000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-158000
--- FAIL: TestOffline (754.26s)
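Reading the trace above: minikube skipped the reserved subnets 192.168.49.0/24 and 192.168.58.0/24, had 192.168.67.0/24 and 192.168.76.0/24 rejected by the daemon with "Pool overlaps with other one on this address space", and only then created the network on 192.168.85.0/24. The preload-extraction docker run issued at 11:15:19 never reported completion before the 6-minute (360 s) create-host timeout expired (the next log entry is at 11:21:19), so the node container was never created, every docker container inspect probe failed with "No such container", and the start exited with DRV_CREATE_TIMEOUT (exit status 52). A diagnostic sketch (not part of the test run) for seeing which address pools the Docker daemon has already allocated on such a host:

	# list every Docker network together with the subnet(s) it has claimed
	for net in $(docker network ls -q); do
	  docker network inspect "$net" --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
	done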

TestIngressAddonLegacy/StartLegacyK8sCluster (277.45s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-580000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0229 09:54:19.000029    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 09:54:46.685911    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 09:54:50.621485    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 09:54:50.627180    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 09:54:50.637947    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 09:54:50.660218    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 09:54:50.700459    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 09:54:50.780722    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 09:54:50.942372    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 09:54:51.263123    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 09:54:51.905364    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 09:54:53.185540    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 09:54:55.746114    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 09:55:00.867461    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 09:55:11.109538    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 09:55:31.590871    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 09:56:12.550443    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-580000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m37.40753334s)

-- stdout --
	* [ingress-addon-legacy-580000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18259
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-580000 in cluster ingress-addon-legacy-580000
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
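In the stdout above, "Generating certificates and keys" and "Booting up control plane" each appear twice with no success message, i.e. the control-plane bring-up was attempted twice before the run gave up with exit status 109. A follow-up sketch for gathering more detail, assuming the node container is still running (the stderr below shows it was created and provisioned); the inner docker ps relies on the kic node running its own Docker daemon, and the log file name is arbitrary:

	out/minikube-darwin-amd64 logs -p ingress-addon-legacy-580000 --file=./ingress-legacy.log   # collect node/kubelet logs for inspection
	docker exec ingress-addon-legacy-580000 docker ps -a                                        # control-plane containers inside the kic node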
** stderr ** 
	I0229 09:52:18.321196    4284 out.go:291] Setting OutFile to fd 1 ...
	I0229 09:52:18.321971    4284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 09:52:18.321980    4284 out.go:304] Setting ErrFile to fd 2...
	I0229 09:52:18.321985    4284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 09:52:18.322553    4284 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
	I0229 09:52:18.324292    4284 out.go:298] Setting JSON to false
	I0229 09:52:18.346898    4284 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1308,"bootTime":1709227830,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0229 09:52:18.346995    4284 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 09:52:18.368946    4284 out.go:177] * [ingress-addon-legacy-580000] minikube v1.32.0 on Darwin 14.3.1
	I0229 09:52:18.411552    4284 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 09:52:18.411620    4284 notify.go:220] Checking for updates...
	I0229 09:52:18.454446    4284 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig
	I0229 09:52:18.497421    4284 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0229 09:52:18.518475    4284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 09:52:18.540352    4284 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	I0229 09:52:18.561474    4284 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 09:52:18.582689    4284 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 09:52:18.638449    4284 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 09:52:18.638614    4284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 09:52:18.744865    4284 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:108 SystemTime:2024-02-29 17:52:18.734964399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 09:52:18.787176    4284 out.go:177] * Using the docker driver based on user configuration
	I0229 09:52:18.810043    4284 start.go:299] selected driver: docker
	I0229 09:52:18.810072    4284 start.go:903] validating driver "docker" against <nil>
	I0229 09:52:18.810086    4284 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 09:52:18.814529    4284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 09:52:18.918657    4284 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:108 SystemTime:2024-02-29 17:52:18.908892533 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 09:52:18.918844    4284 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 09:52:18.919020    4284 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 09:52:18.940164    4284 out.go:177] * Using Docker Desktop driver with root privileges
	I0229 09:52:18.962931    4284 cni.go:84] Creating CNI manager for ""
	I0229 09:52:18.962973    4284 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 09:52:18.962995    4284 start_flags.go:323] config:
	{Name:ingress-addon-legacy-580000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-580000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 09:52:18.985034    4284 out.go:177] * Starting control plane node ingress-addon-legacy-580000 in cluster ingress-addon-legacy-580000
	I0229 09:52:19.026912    4284 cache.go:121] Beginning downloading kic base image for docker with docker
	I0229 09:52:19.048194    4284 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0229 09:52:19.089989    4284 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0229 09:52:19.090047    4284 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 09:52:19.141107    4284 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0229 09:52:19.141134    4284 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0229 09:52:19.365234    4284 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0229 09:52:19.365273    4284 cache.go:56] Caching tarball of preloaded images
	I0229 09:52:19.365717    4284 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0229 09:52:19.388128    4284 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0229 09:52:19.429884    4284 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0229 09:52:20.283291    4284 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0229 09:52:37.533355    4284 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0229 09:52:37.533540    4284 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0229 09:52:38.122765    4284 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0229 09:52:38.123108    4284 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/config.json ...
	I0229 09:52:38.123134    4284 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/config.json: {Name:mkb698ace82b02e0cbd37f0f3d203dc7e7239d6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 09:52:38.123457    4284 cache.go:194] Successfully downloaded all kic artifacts
	I0229 09:52:38.123494    4284 start.go:365] acquiring machines lock for ingress-addon-legacy-580000: {Name:mkc47562a5cd3ce238135af49ecddd700caa3fa3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 09:52:38.123583    4284 start.go:369] acquired machines lock for "ingress-addon-legacy-580000" in 81.645µs
	I0229 09:52:38.123604    4284 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-580000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-580000 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 09:52:38.123647    4284 start.go:125] createHost starting for "" (driver="docker")
	I0229 09:52:38.145015    4284 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0229 09:52:38.145244    4284 start.go:159] libmachine.API.Create for "ingress-addon-legacy-580000" (driver="docker")
	I0229 09:52:38.145272    4284 client.go:168] LocalClient.Create starting
	I0229 09:52:38.145388    4284 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18259-932/.minikube/certs/ca.pem
	I0229 09:52:38.145443    4284 main.go:141] libmachine: Decoding PEM data...
	I0229 09:52:38.145459    4284 main.go:141] libmachine: Parsing certificate...
	I0229 09:52:38.145515    4284 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18259-932/.minikube/certs/cert.pem
	I0229 09:52:38.145555    4284 main.go:141] libmachine: Decoding PEM data...
	I0229 09:52:38.145562    4284 main.go:141] libmachine: Parsing certificate...
	I0229 09:52:38.166587    4284 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-580000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0229 09:52:38.220101    4284 cli_runner.go:211] docker network inspect ingress-addon-legacy-580000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0229 09:52:38.220221    4284 network_create.go:281] running [docker network inspect ingress-addon-legacy-580000] to gather additional debugging logs...
	I0229 09:52:38.220239    4284 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-580000
	W0229 09:52:38.270439    4284 cli_runner.go:211] docker network inspect ingress-addon-legacy-580000 returned with exit code 1
	I0229 09:52:38.270484    4284 network_create.go:284] error running [docker network inspect ingress-addon-legacy-580000]: docker network inspect ingress-addon-legacy-580000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-580000 not found
	I0229 09:52:38.270500    4284 network_create.go:286] output of [docker network inspect ingress-addon-legacy-580000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-580000 not found
	
	** /stderr **
	I0229 09:52:38.270650    4284 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 09:52:38.322076    4284 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021b1050}
	I0229 09:52:38.343078    4284 network_create.go:124] attempt to create docker network ingress-addon-legacy-580000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
	I0229 09:52:38.343265    4284 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-580000 ingress-addon-legacy-580000
	I0229 09:52:38.430221    4284 network_create.go:108] docker network ingress-addon-legacy-580000 192.168.49.0/24 created
	I0229 09:52:38.430265    4284 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-580000" container
	I0229 09:52:38.430396    4284 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0229 09:52:38.481103    4284 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-580000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-580000 --label created_by.minikube.sigs.k8s.io=true
	I0229 09:52:38.534908    4284 oci.go:103] Successfully created a docker volume ingress-addon-legacy-580000
	I0229 09:52:38.535031    4284 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-580000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-580000 --entrypoint /usr/bin/test -v ingress-addon-legacy-580000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0229 09:52:38.952753    4284 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-580000
	I0229 09:52:38.952793    4284 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0229 09:52:38.952806    4284 kic.go:194] Starting extracting preloaded images to volume ...
	I0229 09:52:38.952913    4284 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-580000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0229 09:52:41.382306    4284 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-580000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir: (2.429359552s)
	I0229 09:52:41.382337    4284 kic.go:203] duration metric: took 2.429563 seconds to extract preloaded images to volume
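The preload tarball is bind-mounted read-only into a throwaway kicbase container and untarred into the named volume that is later mounted as the node's /var. To peek at what the extraction produced, any small image able to mount the volume works (busybox here is only an illustration, not what the test itself runs):

	docker run --rm -v ingress-addon-legacy-580000:/var busybox ls /var/lib/docker
	# should list docker's storage directories (image/, overlay2/, ...) seeded from the preload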
	I0229 09:52:41.382465    4284 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0229 09:52:41.488118    4284 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-580000 --name ingress-addon-legacy-580000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-580000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-580000 --network ingress-addon-legacy-580000 --ip 192.168.49.2 --volume ingress-addon-legacy-580000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08
	I0229 09:52:41.766132    4284 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-580000 --format={{.State.Running}}
	I0229 09:52:41.821307    4284 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-580000 --format={{.State.Status}}
	I0229 09:52:41.880128    4284 cli_runner.go:164] Run: docker exec ingress-addon-legacy-580000 stat /var/lib/dpkg/alternatives/iptables
	I0229 09:52:41.990596    4284 oci.go:144] the created container "ingress-addon-legacy-580000" has a running status.
	I0229 09:52:41.990648    4284 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/18259-932/.minikube/machines/ingress-addon-legacy-580000/id_rsa...
	I0229 09:52:42.091938    4284 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18259-932/.minikube/machines/ingress-addon-legacy-580000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0229 09:52:42.092009    4284 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/18259-932/.minikube/machines/ingress-addon-legacy-580000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0229 09:52:42.180266    4284 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-580000 --format={{.State.Status}}
	I0229 09:52:42.237802    4284 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0229 09:52:42.237829    4284 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-580000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0229 09:52:42.396150    4284 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-580000 --format={{.State.Status}}
	I0229 09:52:42.450228    4284 machine.go:88] provisioning docker machine ...
	I0229 09:52:42.450274    4284 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-580000"
	I0229 09:52:42.450395    4284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-580000
	I0229 09:52:42.503024    4284 main.go:141] libmachine: Using SSH client type: native
	I0229 09:52:42.503240    4284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf830920] 0xf833680 <nil>  [] 0s} 127.0.0.1 50583 <nil> <nil>}
	I0229 09:52:42.503255    4284 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-580000 && echo "ingress-addon-legacy-580000" | sudo tee /etc/hostname
	I0229 09:52:42.645999    4284 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-580000
	
	I0229 09:52:42.646091    4284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-580000
	I0229 09:52:42.697926    4284 main.go:141] libmachine: Using SSH client type: native
	I0229 09:52:42.698101    4284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf830920] 0xf833680 <nil>  [] 0s} 127.0.0.1 50583 <nil> <nil>}
	I0229 09:52:42.698117    4284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-580000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-580000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-580000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 09:52:42.817487    4284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 09:52:42.817510    4284 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18259-932/.minikube CaCertPath:/Users/jenkins/minikube-integration/18259-932/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18259-932/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18259-932/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18259-932/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18259-932/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18259-932/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18259-932/.minikube}
	I0229 09:52:42.817529    4284 ubuntu.go:177] setting up certificates
	I0229 09:52:42.817539    4284 provision.go:83] configureAuth start
	I0229 09:52:42.817615    4284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-580000
	I0229 09:52:42.869323    4284 provision.go:138] copyHostCerts
	I0229 09:52:42.869368    4284 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18259-932/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18259-932/.minikube/ca.pem
	I0229 09:52:42.869421    4284 exec_runner.go:144] found /Users/jenkins/minikube-integration/18259-932/.minikube/ca.pem, removing ...
	I0229 09:52:42.869439    4284 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18259-932/.minikube/ca.pem
	I0229 09:52:42.869563    4284 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18259-932/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18259-932/.minikube/ca.pem (1078 bytes)
	I0229 09:52:42.869763    4284 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18259-932/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18259-932/.minikube/cert.pem
	I0229 09:52:42.869791    4284 exec_runner.go:144] found /Users/jenkins/minikube-integration/18259-932/.minikube/cert.pem, removing ...
	I0229 09:52:42.869796    4284 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18259-932/.minikube/cert.pem
	I0229 09:52:42.869906    4284 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18259-932/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18259-932/.minikube/cert.pem (1123 bytes)
	I0229 09:52:42.870078    4284 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18259-932/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18259-932/.minikube/key.pem
	I0229 09:52:42.870117    4284 exec_runner.go:144] found /Users/jenkins/minikube-integration/18259-932/.minikube/key.pem, removing ...
	I0229 09:52:42.870122    4284 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18259-932/.minikube/key.pem
	I0229 09:52:42.870199    4284 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18259-932/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18259-932/.minikube/key.pem (1679 bytes)
	I0229 09:52:42.870352    4284 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18259-932/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18259-932/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18259-932/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-580000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-580000]
	I0229 09:52:42.990657    4284 provision.go:172] copyRemoteCerts
	I0229 09:52:42.990711    4284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 09:52:42.990769    4284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-580000
	I0229 09:52:43.043906    4284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50583 SSHKeyPath:/Users/jenkins/minikube-integration/18259-932/.minikube/machines/ingress-addon-legacy-580000/id_rsa Username:docker}
	I0229 09:52:43.138107    4284 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18259-932/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0229 09:52:43.138179    4284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18259-932/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 09:52:43.178036    4284 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18259-932/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0229 09:52:43.178203    4284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18259-932/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0229 09:52:43.218061    4284 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18259-932/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0229 09:52:43.218191    4284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18259-932/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 09:52:43.257283    4284 provision.go:86] duration metric: configureAuth took 439.734658ms
	I0229 09:52:43.257299    4284 ubuntu.go:193] setting minikube options for container-runtime
	I0229 09:52:43.257507    4284 config.go:182] Loaded profile config "ingress-addon-legacy-580000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 09:52:43.257621    4284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-580000
	I0229 09:52:43.310516    4284 main.go:141] libmachine: Using SSH client type: native
	I0229 09:52:43.310726    4284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf830920] 0xf833680 <nil>  [] 0s} 127.0.0.1 50583 <nil> <nil>}
	I0229 09:52:43.310742    4284 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 09:52:43.430263    4284 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0229 09:52:43.430278    4284 ubuntu.go:71] root file system type: overlay
	I0229 09:52:43.430370    4284 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 09:52:43.430448    4284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-580000
	I0229 09:52:43.482064    4284 main.go:141] libmachine: Using SSH client type: native
	I0229 09:52:43.482243    4284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf830920] 0xf833680 <nil>  [] 0s} 127.0.0.1 50583 <nil> <nil>}
	I0229 09:52:43.482291    4284 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 09:52:43.623360    4284 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 09:52:43.623466    4284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-580000
	I0229 09:52:43.676067    4284 main.go:141] libmachine: Using SSH client type: native
	I0229 09:52:43.676259    4284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf830920] 0xf833680 <nil>  [] 0s} 127.0.0.1 50583 <nil> <nil>}
	I0229 09:52:43.676274    4284 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 09:52:44.299803    4284 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-02-06 21:12:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-29 17:52:43.618924679 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0229 09:52:44.299828    4284 machine.go:91] provisioned docker machine in 1.849601782s
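The diff above shows minikube's docker.service replacing the stock unit: ExecStart is cleared and re-set to a dockerd that listens on tcp://0.0.0.0:2376 using the TLS files copied to /etc/docker a moment earlier, and the service is re-enabled and restarted. A minimal sanity check against the container, assuming it is still running, would be:

	docker exec ingress-addon-legacy-580000 systemctl is-active docker
	docker exec ingress-addon-legacy-580000 systemctl cat docker | grep ExecStart
	# the second command should show the empty ExecStart= followed by the dockerd line from the new unit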
	I0229 09:52:44.299843    4284 client.go:171] LocalClient.Create took 6.154644733s
	I0229 09:52:44.299863    4284 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-580000" took 6.154705718s
	I0229 09:52:44.299873    4284 start.go:300] post-start starting for "ingress-addon-legacy-580000" (driver="docker")
	I0229 09:52:44.299880    4284 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 09:52:44.300000    4284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 09:52:44.300052    4284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-580000
	I0229 09:52:44.353425    4284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50583 SSHKeyPath:/Users/jenkins/minikube-integration/18259-932/.minikube/machines/ingress-addon-legacy-580000/id_rsa Username:docker}
	I0229 09:52:44.448181    4284 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 09:52:44.452208    4284 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0229 09:52:44.452233    4284 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0229 09:52:44.452240    4284 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0229 09:52:44.452245    4284 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0229 09:52:44.452255    4284 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18259-932/.minikube/addons for local assets ...
	I0229 09:52:44.452349    4284 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18259-932/.minikube/files for local assets ...
	I0229 09:52:44.452557    4284 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18259-932/.minikube/files/etc/ssl/certs/14082.pem -> 14082.pem in /etc/ssl/certs
	I0229 09:52:44.452563    4284 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18259-932/.minikube/files/etc/ssl/certs/14082.pem -> /etc/ssl/certs/14082.pem
	I0229 09:52:44.452829    4284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 09:52:44.467643    4284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18259-932/.minikube/files/etc/ssl/certs/14082.pem --> /etc/ssl/certs/14082.pem (1708 bytes)
	I0229 09:52:44.507423    4284 start.go:303] post-start completed in 207.521873ms
	I0229 09:52:44.508176    4284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-580000
	I0229 09:52:44.560788    4284 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/config.json ...
	I0229 09:52:44.561246    4284 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 09:52:44.561300    4284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-580000
	I0229 09:52:44.614265    4284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50583 SSHKeyPath:/Users/jenkins/minikube-integration/18259-932/.minikube/machines/ingress-addon-legacy-580000/id_rsa Username:docker}
	I0229 09:52:44.698556    4284 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 09:52:44.703530    4284 start.go:128] duration metric: createHost completed in 6.579963844s
	I0229 09:52:44.703547    4284 start.go:83] releasing machines lock for "ingress-addon-legacy-580000", held for 6.580048541s
	I0229 09:52:44.703634    4284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-580000
	I0229 09:52:44.755247    4284 ssh_runner.go:195] Run: cat /version.json
	I0229 09:52:44.755278    4284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 09:52:44.755323    4284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-580000
	I0229 09:52:44.755360    4284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-580000
	I0229 09:52:44.812409    4284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50583 SSHKeyPath:/Users/jenkins/minikube-integration/18259-932/.minikube/machines/ingress-addon-legacy-580000/id_rsa Username:docker}
	I0229 09:52:44.812409    4284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50583 SSHKeyPath:/Users/jenkins/minikube-integration/18259-932/.minikube/machines/ingress-addon-legacy-580000/id_rsa Username:docker}
	I0229 09:52:44.991098    4284 ssh_runner.go:195] Run: systemctl --version
	I0229 09:52:44.995920    4284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 09:52:45.000998    4284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0229 09:52:45.042081    4284 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0229 09:52:45.042183    4284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0229 09:52:45.070951    4284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0229 09:52:45.099623    4284 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 09:52:45.099638    4284 start.go:475] detecting cgroup driver to use...
	I0229 09:52:45.099655    4284 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0229 09:52:45.099808    4284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 09:52:45.128830    4284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0229 09:52:45.145070    4284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 09:52:45.161190    4284 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 09:52:45.161302    4284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 09:52:45.177605    4284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 09:52:45.193657    4284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 09:52:45.209640    4284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 09:52:45.226119    4284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 09:52:45.242453    4284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 09:52:45.259339    4284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 09:52:45.274164    4284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 09:52:45.288510    4284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 09:52:45.352585    4284 ssh_runner.go:195] Run: sudo systemctl restart containerd
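Before switching its attention to Docker, minikube normalises the containerd config shipped in the kicbase image: the sandbox image is pinned to registry.k8s.io/pause:3.2, SystemdCgroup is forced to false to match the detected cgroupfs driver, and containerd is restarted. The effect of those sed edits can be inspected directly, for example:

	docker exec ingress-addon-legacy-580000 grep -E 'sandbox_image|SystemdCgroup' /etc/containerd/config.toml
	# should show sandbox_image = "registry.k8s.io/pause:3.2" and SystemdCgroup = false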
	I0229 09:52:45.450147    4284 start.go:475] detecting cgroup driver to use...
	I0229 09:52:45.450167    4284 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0229 09:52:45.450264    4284 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 09:52:45.478918    4284 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0229 09:52:45.478993    4284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 09:52:45.498675    4284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 09:52:45.528673    4284 ssh_runner.go:195] Run: which cri-dockerd
	I0229 09:52:45.533316    4284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 09:52:45.550401    4284 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 09:52:45.580374    4284 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 09:52:45.676639    4284 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 09:52:45.745744    4284 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 09:52:45.745938    4284 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 09:52:45.775115    4284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 09:52:45.838595    4284 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 09:52:46.087335    4284 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 09:52:46.109384    4284 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 09:52:46.176078    4284 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
	I0229 09:52:46.176170    4284 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-580000 dig +short host.docker.internal
	I0229 09:52:46.290934    4284 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0229 09:52:46.291041    4284 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0229 09:52:46.295568    4284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 09:52:46.314567    4284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-580000
	I0229 09:52:46.367306    4284 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0229 09:52:46.367382    4284 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 09:52:46.384611    4284 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0229 09:52:46.384626    4284 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0229 09:52:46.384686    4284 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 09:52:46.399632    4284 ssh_runner.go:195] Run: which lz4
	I0229 09:52:46.404077    4284 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0229 09:52:46.404199    4284 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 09:52:46.408244    4284 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 09:52:46.408265    4284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I0229 09:52:53.250053    4284 docker.go:649] Took 6.846002 seconds to copy over tarball
	I0229 09:52:53.250188    4284 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 09:52:54.966128    4284 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.715927175s)
	I0229 09:52:54.966145    4284 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 09:52:55.021717    4284 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 09:52:55.038113    4284 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0229 09:52:55.066175    4284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 09:52:55.128518    4284 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 09:52:56.123260    4284 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 09:52:56.141728    4284 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0229 09:52:56.141741    4284 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0229 09:52:56.141756    4284 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 09:52:56.147974    4284 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0229 09:52:56.147982    4284 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 09:52:56.148006    4284 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0229 09:52:56.148167    4284 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 09:52:56.148273    4284 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0229 09:52:56.148719    4284 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 09:52:56.149287    4284 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 09:52:56.149568    4284 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 09:52:56.153031    4284 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0229 09:52:56.153133    4284 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0229 09:52:56.154135    4284 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0229 09:52:56.154700    4284 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 09:52:56.154929    4284 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 09:52:56.155258    4284 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 09:52:56.155360    4284 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 09:52:56.155357    4284 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 09:52:58.036267    4284 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0229 09:52:58.054263    4284 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0229 09:52:58.054308    4284 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 09:52:58.054374    4284 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0229 09:52:58.072245    4284 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18259-932/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0229 09:52:58.126693    4284 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0229 09:52:58.143545    4284 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0229 09:52:58.143593    4284 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0229 09:52:58.143657    4284 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0229 09:52:58.159800    4284 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18259-932/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0229 09:52:58.164372    4284 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0229 09:52:58.169454    4284 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0229 09:52:58.173294    4284 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0229 09:52:58.182680    4284 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0229 09:52:58.182728    4284 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 09:52:58.182808    4284 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0229 09:52:58.186926    4284 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 09:52:58.189546    4284 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0229 09:52:58.189583    4284 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0229 09:52:58.189653    4284 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0229 09:52:58.194737    4284 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0229 09:52:58.194774    4284 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 09:52:58.194853    4284 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0229 09:52:58.195370    4284 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0229 09:52:58.205179    4284 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18259-932/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0229 09:52:58.210420    4284 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0229 09:52:58.210455    4284 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 09:52:58.210537    4284 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 09:52:58.213648    4284 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18259-932/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0229 09:52:58.220113    4284 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18259-932/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0229 09:52:58.220130    4284 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0229 09:52:58.220152    4284 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
	I0229 09:52:58.220203    4284 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0229 09:52:58.280892    4284 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18259-932/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0229 09:52:58.286010    4284 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18259-932/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0229 09:52:59.021128    4284 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 09:52:59.039022    4284 cache_images.go:92] LoadImages completed in 2.89729383s
	W0229 09:52:59.039071    4284 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18259-932/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18259-932/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
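Note what the two warnings point at: the preload ships images under the old k8s.gcr.io prefix, while this minikube build looks them up under registry.k8s.io, so every image is flagged "needs transfer" and the host-side image cache is consulted, which has no kube-proxy entry here. The naming mismatch is visible on the node itself, e.g.:

	docker exec ingress-addon-legacy-580000 docker images --format '{{.Repository}}:{{.Tag}}' | grep kube
	# should still list k8s.gcr.io/kube-* tags rather than the registry.k8s.io/* names the loader expects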
	I0229 09:52:59.039149    4284 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 09:52:59.086630    4284 cni.go:84] Creating CNI manager for ""
	I0229 09:52:59.086650    4284 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 09:52:59.086660    4284 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 09:52:59.086682    4284 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-580000 NodeName:ingress-addon-legacy-580000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 09:52:59.086771    4284 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-580000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 09:52:59.086821    4284 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-580000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-580000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 09:52:59.086887    4284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0229 09:52:59.101829    4284 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 09:52:59.101891    4284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 09:52:59.116561    4284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0229 09:52:59.144936    4284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0229 09:52:59.173329    4284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
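At this point the rendered kubeadm config (the YAML block logged above), the kubelet unit and its drop-in have all been copied to the node, with the config staged as /var/tmp/minikube/kubeadm.yaml.new. Its contents can be checked in place, for instance:

	docker exec ingress-addon-legacy-580000 cat /var/tmp/minikube/kubeadm.yaml.new | head -n 20
	# should begin with the kubeadm.k8s.io/v1beta2 InitConfiguration shown earlier in the log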
	I0229 09:52:59.201924    4284 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0229 09:52:59.206355    4284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 09:52:59.224321    4284 certs.go:56] Setting up /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000 for IP: 192.168.49.2
	I0229 09:52:59.224381    4284 certs.go:190] acquiring lock for shared ca certs: {Name:mkc9f82ec686f18428dac33e4e0986537b4ba8dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 09:52:59.224629    4284 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18259-932/.minikube/ca.key
	I0229 09:52:59.224715    4284 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18259-932/.minikube/proxy-client-ca.key
	I0229 09:52:59.224806    4284 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/client.key
	I0229 09:52:59.224846    4284 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/client.crt with IP's: []
	I0229 09:52:59.340563    4284 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/client.crt ...
	I0229 09:52:59.340575    4284 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/client.crt: {Name:mk36382e26f18a1e5853f5edd7abc3f79a1d8f09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 09:52:59.340903    4284 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/client.key ...
	I0229 09:52:59.340912    4284 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/client.key: {Name:mk8627c4b09b2cd30f327c0b1bc2d642e6f2937e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 09:52:59.341144    4284 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/apiserver.key.dd3b5fb2
	I0229 09:52:59.341159    4284 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 09:52:59.478875    4284 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/apiserver.crt.dd3b5fb2 ...
	I0229 09:52:59.478888    4284 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/apiserver.crt.dd3b5fb2: {Name:mke46730a9b6d328be1071abfc94c25ecc23ddf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 09:52:59.479187    4284 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/apiserver.key.dd3b5fb2 ...
	I0229 09:52:59.479197    4284 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/apiserver.key.dd3b5fb2: {Name:mk97fcdfb856f27477086a52d38e652dd1258d04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 09:52:59.479418    4284 certs.go:337] copying /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/apiserver.crt
	I0229 09:52:59.479619    4284 certs.go:341] copying /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/apiserver.key
	I0229 09:52:59.479787    4284 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/proxy-client.key
	I0229 09:52:59.479812    4284 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/proxy-client.crt with IP's: []
	I0229 09:52:59.675296    4284 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/proxy-client.crt ...
	I0229 09:52:59.675311    4284 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/proxy-client.crt: {Name:mkb1364b74da002210639fbd93d534a54288c32a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 09:52:59.675624    4284 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/proxy-client.key ...
	I0229 09:52:59.675636    4284 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/proxy-client.key: {Name:mk37d2f1edd31692ca4a60b3e8c0d5d3cdfcfed0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 09:52:59.675843    4284 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 09:52:59.675873    4284 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 09:52:59.675891    4284 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 09:52:59.675907    4284 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 09:52:59.675924    4284 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18259-932/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 09:52:59.675940    4284 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18259-932/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0229 09:52:59.675959    4284 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18259-932/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 09:52:59.675974    4284 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18259-932/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 09:52:59.676058    4284 certs.go:437] found cert: /Users/jenkins/minikube-integration/18259-932/.minikube/certs/Users/jenkins/minikube-integration/18259-932/.minikube/certs/1408.pem (1338 bytes)
	W0229 09:52:59.676114    4284 certs.go:433] ignoring /Users/jenkins/minikube-integration/18259-932/.minikube/certs/Users/jenkins/minikube-integration/18259-932/.minikube/certs/1408_empty.pem, impossibly tiny 0 bytes
	I0229 09:52:59.676123    4284 certs.go:437] found cert: /Users/jenkins/minikube-integration/18259-932/.minikube/certs/Users/jenkins/minikube-integration/18259-932/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 09:52:59.676159    4284 certs.go:437] found cert: /Users/jenkins/minikube-integration/18259-932/.minikube/certs/Users/jenkins/minikube-integration/18259-932/.minikube/certs/ca.pem (1078 bytes)
	I0229 09:52:59.676191    4284 certs.go:437] found cert: /Users/jenkins/minikube-integration/18259-932/.minikube/certs/Users/jenkins/minikube-integration/18259-932/.minikube/certs/cert.pem (1123 bytes)
	I0229 09:52:59.676217    4284 certs.go:437] found cert: /Users/jenkins/minikube-integration/18259-932/.minikube/certs/Users/jenkins/minikube-integration/18259-932/.minikube/certs/key.pem (1679 bytes)
	I0229 09:52:59.676285    4284 certs.go:437] found cert: /Users/jenkins/minikube-integration/18259-932/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18259-932/.minikube/files/etc/ssl/certs/14082.pem (1708 bytes)
	I0229 09:52:59.676316    4284 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18259-932/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 09:52:59.676333    4284 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18259-932/.minikube/certs/1408.pem -> /usr/share/ca-certificates/1408.pem
	I0229 09:52:59.676348    4284 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18259-932/.minikube/files/etc/ssl/certs/14082.pem -> /usr/share/ca-certificates/14082.pem
	I0229 09:52:59.676795    4284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 09:52:59.718310    4284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 09:52:59.758487    4284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 09:52:59.799915    4284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/ingress-addon-legacy-580000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 09:52:59.841041    4284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18259-932/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 09:52:59.882287    4284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18259-932/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 09:52:59.922453    4284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18259-932/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 09:52:59.962961    4284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18259-932/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 09:53:00.004228    4284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18259-932/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 09:53:00.045982    4284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18259-932/.minikube/certs/1408.pem --> /usr/share/ca-certificates/1408.pem (1338 bytes)
	I0229 09:53:00.086625    4284 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18259-932/.minikube/files/etc/ssl/certs/14082.pem --> /usr/share/ca-certificates/14082.pem (1708 bytes)
	I0229 09:53:00.127904    4284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 09:53:00.158432    4284 ssh_runner.go:195] Run: openssl version
	I0229 09:53:00.164341    4284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 09:53:00.179971    4284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 09:53:00.184234    4284 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:39 /usr/share/ca-certificates/minikubeCA.pem
	I0229 09:53:00.184285    4284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 09:53:00.190853    4284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 09:53:00.206980    4284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1408.pem && ln -fs /usr/share/ca-certificates/1408.pem /etc/ssl/certs/1408.pem"
	I0229 09:53:00.222913    4284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1408.pem
	I0229 09:53:00.227082    4284 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:47 /usr/share/ca-certificates/1408.pem
	I0229 09:53:00.227126    4284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1408.pem
	I0229 09:53:00.233728    4284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1408.pem /etc/ssl/certs/51391683.0"
	I0229 09:53:00.249033    4284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14082.pem && ln -fs /usr/share/ca-certificates/14082.pem /etc/ssl/certs/14082.pem"
	I0229 09:53:00.264826    4284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14082.pem
	I0229 09:53:00.269718    4284 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:47 /usr/share/ca-certificates/14082.pem
	I0229 09:53:00.269782    4284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14082.pem
	I0229 09:53:00.276438    4284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14082.pem /etc/ssl/certs/3ec20f2e.0"
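The b5213941.0, 51391683.0 and 3ec20f2e.0 links created above follow OpenSSL's subject-hash naming convention: each trusted PEM is hashed and a <hash>.0 symlink is placed in /etc/ssl/certs so that TLS clients on the node trust the minikube CA and the two test certificates. A minimal sketch of the same steps run by hand on the node, with paths taken from the log (only the minikubeCA case shown):

  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
  $ sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
  $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0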
	I0229 09:53:00.292077    4284 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 09:53:00.296612    4284 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 09:53:00.296657    4284 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-580000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-580000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
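The ClusterConfig dumped above is rendered by minikube into the kubeadm configuration consumed by the init invocation a few lines below (/var/tmp/minikube/kubeadm.yaml on the node). If the kic container survives a failure like this one, the generated file can be inspected from the host; the container name is assumed here to match the profile name, as is usual for the docker driver:

  $ docker exec ingress-addon-legacy-580000 sudo cat /var/tmp/minikube/kubeadm.yaml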
	I0229 09:53:00.296757    4284 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 09:53:00.313804    4284 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 09:53:00.328786    4284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 09:53:00.343870    4284 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0229 09:53:00.343935    4284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 09:53:00.358290    4284 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 09:53:00.358328    4284 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0229 09:53:00.419131    4284 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0229 09:53:00.419241    4284 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 09:53:00.691514    4284 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 09:53:00.691610    4284 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 09:53:00.691711    4284 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 09:53:00.849537    4284 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 09:53:00.850230    4284 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 09:53:00.850277    4284 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 09:53:00.924140    4284 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 09:53:00.946032    4284 out.go:204]   - Generating certificates and keys ...
	I0229 09:53:00.946114    4284 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 09:53:00.946180    4284 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 09:53:01.224632    4284 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 09:53:01.383990    4284 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 09:53:01.528270    4284 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 09:53:01.576626    4284 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 09:53:01.679419    4284 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 09:53:01.679635    4284 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-580000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0229 09:53:01.828401    4284 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 09:53:01.828624    4284 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-580000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0229 09:53:01.987581    4284 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 09:53:02.148010    4284 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 09:53:02.404069    4284 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 09:53:02.404163    4284 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 09:53:02.521784    4284 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 09:53:02.671056    4284 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 09:53:02.971237    4284 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 09:53:03.239898    4284 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 09:53:03.240501    4284 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 09:53:03.262271    4284 out.go:204]   - Booting up control plane ...
	I0229 09:53:03.262427    4284 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 09:53:03.262542    4284 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 09:53:03.262651    4284 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 09:53:03.262792    4284 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 09:53:03.263035    4284 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 09:53:43.250395    4284 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 09:53:43.250715    4284 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 09:53:43.250917    4284 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 09:53:48.252882    4284 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 09:53:48.253091    4284 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 09:53:58.255219    4284 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 09:53:58.255437    4284 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 09:54:18.256582    4284 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 09:54:18.256799    4284 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 09:54:58.258467    4284 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 09:54:58.258716    4284 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 09:54:58.258735    4284 kubeadm.go:322] 
	I0229 09:54:58.258782    4284 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0229 09:54:58.258838    4284 kubeadm.go:322] 		timed out waiting for the condition
	I0229 09:54:58.258845    4284 kubeadm.go:322] 
	I0229 09:54:58.258890    4284 kubeadm.go:322] 	This error is likely caused by:
	I0229 09:54:58.258929    4284 kubeadm.go:322] 		- The kubelet is not running
	I0229 09:54:58.259039    4284 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 09:54:58.259050    4284 kubeadm.go:322] 
	I0229 09:54:58.259155    4284 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 09:54:58.259189    4284 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0229 09:54:58.259222    4284 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0229 09:54:58.259230    4284 kubeadm.go:322] 
	I0229 09:54:58.259401    4284 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 09:54:58.259511    4284 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0229 09:54:58.259531    4284 kubeadm.go:322] 
	I0229 09:54:58.259628    4284 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0229 09:54:58.259690    4284 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0229 09:54:58.259766    4284 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0229 09:54:58.259826    4284 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0229 09:54:58.259871    4284 kubeadm.go:322] 
	I0229 09:54:58.263868    4284 kubeadm.go:322] W0229 17:53:00.418487    1761 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0229 09:54:58.264014    4284 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 09:54:58.264091    4284 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0229 09:54:58.264223    4284 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
	I0229 09:54:58.264309    4284 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 09:54:58.264413    4284 kubeadm.go:322] W0229 17:53:03.245462    1761 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 09:54:58.264528    4284 kubeadm.go:322] W0229 17:53:03.246257    1761 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 09:54:58.264595    4284 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 09:54:58.264660    4284 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
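Each [kubelet-check] line above is kubeadm polling the kubelet health endpoint on 127.0.0.1:10248 and getting connection refused, which suggests the kubelet never started listening during the 4-minute wait, so the static pod manifests were never acted on. The checks kubeadm recommends can be run directly inside the node; a hedged sketch, assuming the kic container is named after the profile and that curl is present in the image:

  $ docker exec ingress-addon-legacy-580000 curl -sS http://localhost:10248/healthz
  $ docker exec ingress-addon-legacy-580000 systemctl status kubelet --no-pager
  $ docker exec ingress-addon-legacy-580000 journalctl -xeu kubelet --no-pager | tail -n 100
  $ docker exec ingress-addon-legacy-580000 sh -c 'docker ps -a | grep kube | grep -v pause'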
	W0229 09:54:58.264745    4284 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-580000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-580000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 17:53:00.418487    1761 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:53:03.245462    1761 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:53:03.246257    1761 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-580000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-580000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 17:53:00.418487    1761 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:53:03.245462    1761 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:53:03.246257    1761 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 09:54:58.264776    4284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 09:54:58.689223    4284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
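The kubeadm reset on the previous line wipes the failed attempt (static pod manifests, etcd data, kubelet state) and the systemctl check confirms the kubelet service is not active before the retry. A quick way to verify the node really is back to a clean state, again assuming the kic container is named after the profile (both paths may simply be missing, which is the expected result):

  $ docker exec ingress-addon-legacy-580000 sudo ls -la /etc/kubernetes/manifests /var/lib/minikube/etcd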
	I0229 09:54:58.706399    4284 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0229 09:54:58.706456    4284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 09:54:58.721389    4284 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 09:54:58.721413    4284 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0229 09:54:58.777112    4284 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0229 09:54:58.777173    4284 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 09:54:59.012270    4284 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 09:54:59.012364    4284 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 09:54:59.012442    4284 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 09:54:59.167021    4284 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 09:54:59.167597    4284 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 09:54:59.167637    4284 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 09:54:59.230109    4284 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 09:54:59.251622    4284 out.go:204]   - Generating certificates and keys ...
	I0229 09:54:59.251692    4284 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 09:54:59.251792    4284 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 09:54:59.251876    4284 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 09:54:59.251977    4284 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 09:54:59.252103    4284 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 09:54:59.252156    4284 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 09:54:59.252291    4284 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 09:54:59.252404    4284 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 09:54:59.252501    4284 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 09:54:59.252595    4284 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 09:54:59.252656    4284 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 09:54:59.252713    4284 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 09:54:59.456060    4284 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 09:54:59.510324    4284 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 09:54:59.921660    4284 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 09:55:00.115652    4284 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 09:55:00.116357    4284 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 09:55:00.137958    4284 out.go:204]   - Booting up control plane ...
	I0229 09:55:00.138027    4284 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 09:55:00.138084    4284 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 09:55:00.138142    4284 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 09:55:00.138207    4284 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 09:55:00.138371    4284 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 09:55:40.125871    4284 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 09:55:40.126679    4284 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 09:55:40.126888    4284 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 09:55:45.128193    4284 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 09:55:45.128453    4284 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 09:55:55.129853    4284 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 09:55:55.130021    4284 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 09:56:15.132685    4284 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 09:56:15.132944    4284 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 09:56:55.135187    4284 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 09:56:55.135395    4284 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 09:56:55.135404    4284 kubeadm.go:322] 
	I0229 09:56:55.135455    4284 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0229 09:56:55.135502    4284 kubeadm.go:322] 		timed out waiting for the condition
	I0229 09:56:55.135523    4284 kubeadm.go:322] 
	I0229 09:56:55.135573    4284 kubeadm.go:322] 	This error is likely caused by:
	I0229 09:56:55.135611    4284 kubeadm.go:322] 		- The kubelet is not running
	I0229 09:56:55.135721    4284 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 09:56:55.135730    4284 kubeadm.go:322] 
	I0229 09:56:55.135855    4284 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 09:56:55.135896    4284 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0229 09:56:55.135931    4284 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0229 09:56:55.135939    4284 kubeadm.go:322] 
	I0229 09:56:55.136072    4284 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 09:56:55.136174    4284 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0229 09:56:55.136191    4284 kubeadm.go:322] 
	I0229 09:56:55.136310    4284 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0229 09:56:55.136365    4284 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0229 09:56:55.136454    4284 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0229 09:56:55.136506    4284 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0229 09:56:55.136522    4284 kubeadm.go:322] 
	I0229 09:56:55.140609    4284 kubeadm.go:322] W0229 17:54:58.776828    4755 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0229 09:56:55.140763    4284 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 09:56:55.140821    4284 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0229 09:56:55.140925    4284 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
	I0229 09:56:55.141011    4284 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 09:56:55.141113    4284 kubeadm.go:322] W0229 17:55:00.121008    4755 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 09:56:55.141213    4284 kubeadm.go:322] W0229 17:55:00.122456    4755 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 09:56:55.141281    4284 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 09:56:55.141353    4284 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 09:56:55.141384    4284 kubeadm.go:406] StartCluster complete in 3m54.848025085s
	I0229 09:56:55.141465    4284 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 09:56:55.157206    4284 logs.go:276] 0 containers: []
	W0229 09:56:55.157219    4284 logs.go:278] No container was found matching "kube-apiserver"
	I0229 09:56:55.157310    4284 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 09:56:55.173427    4284 logs.go:276] 0 containers: []
	W0229 09:56:55.173442    4284 logs.go:278] No container was found matching "etcd"
	I0229 09:56:55.173508    4284 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 09:56:55.190419    4284 logs.go:276] 0 containers: []
	W0229 09:56:55.190435    4284 logs.go:278] No container was found matching "coredns"
	I0229 09:56:55.190503    4284 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 09:56:55.207281    4284 logs.go:276] 0 containers: []
	W0229 09:56:55.207296    4284 logs.go:278] No container was found matching "kube-scheduler"
	I0229 09:56:55.207382    4284 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 09:56:55.224344    4284 logs.go:276] 0 containers: []
	W0229 09:56:55.224359    4284 logs.go:278] No container was found matching "kube-proxy"
	I0229 09:56:55.224423    4284 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 09:56:55.240205    4284 logs.go:276] 0 containers: []
	W0229 09:56:55.240224    4284 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 09:56:55.240309    4284 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 09:56:55.256351    4284 logs.go:276] 0 containers: []
	W0229 09:56:55.256366    4284 logs.go:278] No container was found matching "kindnet"
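None of the expected control-plane containers (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) were ever created, which points at the kubelet never launching static pods rather than at individual components crashing. The same sweep can be reproduced in one command against the node's inner Docker daemon; container name assumed to match the profile:

  $ docker exec ingress-addon-legacy-580000 docker ps -a --filter name=k8s_ --format '{{.ID}}\t{{.Names}}\t{{.Status}}'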
	I0229 09:56:55.256374    4284 logs.go:123] Gathering logs for kubelet ...
	I0229 09:56:55.256381    4284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 09:56:55.297606    4284 logs.go:123] Gathering logs for dmesg ...
	I0229 09:56:55.297620    4284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 09:56:55.317387    4284 logs.go:123] Gathering logs for describe nodes ...
	I0229 09:56:55.317402    4284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 09:56:55.374414    4284 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 09:56:55.374428    4284 logs.go:123] Gathering logs for Docker ...
	I0229 09:56:55.374441    4284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 09:56:55.395620    4284 logs.go:123] Gathering logs for container status ...
	I0229 09:56:55.395635    4284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
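The commands above are what produce the diagnostic sections that minikube appends after a failed start; with no control-plane containers to inspect, the kubelet and Docker journals are the most informative of them. A hedged equivalent run from the host, assuming the kic container is still up and named after the profile:

  $ docker exec ingress-addon-legacy-580000 journalctl -u kubelet -n 400 --no-pager
  $ docker exec ingress-addon-legacy-580000 journalctl -u docker -u cri-docker -n 400 --no-pager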
	W0229 09:56:55.453488    4284 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 17:54:58.776828    4755 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:55:00.121008    4755 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:55:00.122456    4755 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
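The preflight warnings repeated in this dump are the most concrete leads: the node runs Docker 25.0.3, far beyond the last version validated for Kubernetes v1.18.20 (19.03), and Docker reports the cgroupfs cgroup driver while kubeadm recommends systemd. This log alone does not prove that either is the cause of the kubelet failing to come up, but both are cheap to check on the node; a sketch, with the container name assumed to match the profile:

  $ docker exec ingress-addon-legacy-580000 docker info --format '{{.ServerVersion}} {{.CgroupDriver}}'
  $ docker exec ingress-addon-legacy-580000 grep -i cgroup /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml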
	W0229 09:56:55.453511    4284 out.go:239] * 
	* 
	W0229 09:56:55.453551    4284 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 17:54:58.776828    4755 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:55:00.121008    4755 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:55:00.122456    4755 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 17:54:58.776828    4755 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:55:00.121008    4755 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:55:00.122456    4755 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 09:56:55.453564    4284 out.go:239] * 
	* 
	W0229 09:56:55.454159    4284 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 09:56:55.518091    4284 out.go:177] 
	W0229 09:56:55.559744    4284 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 17:54:58.776828    4755 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:55:00.121008    4755 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:55:00.122456    4755 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 17:54:58.776828    4755 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:55:00.121008    4755 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:55:00.122456    4755 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 09:56:55.559805    4284 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 09:56:55.559825    4284 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 09:56:55.601935    4284 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-580000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (277.45s)
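Editor's note: the kubeadm output above repeats the same pattern: the pre-flight warnings flag a cgroup-driver mismatch (Docker on "cgroupfs" while "systemd" is recommended) and a disabled kubelet service, and the wait-control-plane phase then times out on the kubelet health check. The sketch below is one way to act on the Suggestion line in the log; it assumes the kicbase node runs systemd and that /etc/docker/daemon.json inside the node is writable, and it is not the project's prescribed fix.

	# inside the node (out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-580000), assuming daemon.json is writable:
	echo '{"exec-opts": ["native.cgroupdriver=systemd"]}' | sudo tee /etc/docker/daemon.json
	sudo systemctl restart docker kubelet
	# or recreate the cluster with the kubelet flag the Suggestion line points at:
	out/minikube-darwin-amd64 delete -p ingress-addon-legacy-580000
	out/minikube-darwin-amd64 start -p ingress-addon-legacy-580000 --kubernetes-version=v1.18.20 --memory=4096 --driver=docker --extra-config=kubelet.cgroup-driver=systemd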

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (119.55s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-580000 addons enable ingress --alsologtostderr -v=5
E0229 09:57:34.469760    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-580000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m59.112860401s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 09:56:55.768273    4458 out.go:291] Setting OutFile to fd 1 ...
	I0229 09:56:55.768587    4458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 09:56:55.768593    4458 out.go:304] Setting ErrFile to fd 2...
	I0229 09:56:55.768597    4458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 09:56:55.768785    4458 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
	I0229 09:56:55.769135    4458 mustload.go:65] Loading cluster: ingress-addon-legacy-580000
	I0229 09:56:55.769411    4458 config.go:182] Loaded profile config "ingress-addon-legacy-580000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 09:56:55.769431    4458 addons.go:597] checking whether the cluster is paused
	I0229 09:56:55.769514    4458 config.go:182] Loaded profile config "ingress-addon-legacy-580000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 09:56:55.769531    4458 host.go:66] Checking if "ingress-addon-legacy-580000" exists ...
	I0229 09:56:55.769919    4458 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-580000 --format={{.State.Status}}
	I0229 09:56:55.820006    4458 ssh_runner.go:195] Run: systemctl --version
	I0229 09:56:55.820092    4458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-580000
	I0229 09:56:55.869850    4458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50583 SSHKeyPath:/Users/jenkins/minikube-integration/18259-932/.minikube/machines/ingress-addon-legacy-580000/id_rsa Username:docker}
	I0229 09:56:55.952218    4458 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 09:56:55.990463    4458 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0229 09:56:56.011273    4458 config.go:182] Loaded profile config "ingress-addon-legacy-580000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 09:56:56.011283    4458 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-580000"
	I0229 09:56:56.011290    4458 addons.go:234] Setting addon ingress=true in "ingress-addon-legacy-580000"
	I0229 09:56:56.011323    4458 host.go:66] Checking if "ingress-addon-legacy-580000" exists ...
	I0229 09:56:56.011630    4458 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-580000 --format={{.State.Status}}
	I0229 09:56:56.082161    4458 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0229 09:56:56.103050    4458 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0229 09:56:56.124125    4458 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0229 09:56:56.145146    4458 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0229 09:56:56.166100    4458 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0229 09:56:56.166114    4458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0229 09:56:56.166193    4458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-580000
	I0229 09:56:56.215486    4458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50583 SSHKeyPath:/Users/jenkins/minikube-integration/18259-932/.minikube/machines/ingress-addon-legacy-580000/id_rsa Username:docker}
	I0229 09:56:56.323564    4458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 09:56:56.387292    4458 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:56:56.387321    4458 retry.go:31] will retry after 275.794279ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:56:56.663604    4458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 09:56:56.721631    4458 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:56:56.721649    4458 retry.go:31] will retry after 486.439395ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:56:57.208478    4458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 09:56:57.266661    4458 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:56:57.266679    4458 retry.go:31] will retry after 532.91665ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:56:57.800299    4458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 09:56:57.862712    4458 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:56:57.862729    4458 retry.go:31] will retry after 1.046583537s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:56:58.911636    4458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 09:56:58.974949    4458 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:56:58.974976    4458 retry.go:31] will retry after 843.822354ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:56:59.820781    4458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 09:56:59.879858    4458 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:56:59.879879    4458 retry.go:31] will retry after 1.111401255s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:57:00.991821    4458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 09:57:01.053815    4458 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:57:01.053835    4458 retry.go:31] will retry after 2.248750723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:57:03.304883    4458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 09:57:03.366583    4458 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:57:03.366600    4458 retry.go:31] will retry after 5.945761658s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:57:09.312485    4458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 09:57:09.367071    4458 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:57:09.367090    4458 retry.go:31] will retry after 3.614529584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:57:12.982653    4458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 09:57:13.041571    4458 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:57:13.041588    4458 retry.go:31] will retry after 12.017095444s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:57:25.058883    4458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 09:57:25.113136    4458 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:57:25.113153    4458 retry.go:31] will retry after 21.220985258s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:57:46.335498    4458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 09:57:46.393722    4458 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:57:46.393742    4458 retry.go:31] will retry after 22.057466989s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:58:08.451532    4458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 09:58:08.507725    4458 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:58:08.507744    4458 retry.go:31] will retry after 46.112210627s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:58:54.619432    4458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 09:58:54.679890    4458 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:58:54.679919    4458 addons.go:470] Verifying addon ingress=true in "ingress-addon-legacy-580000"
	I0229 09:58:54.701658    4458 out.go:177] * Verifying ingress addon...
	I0229 09:58:54.745284    4458 out.go:177] 
	W0229 09:58:54.766570    4458 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-580000" does not exist: client config: context "ingress-addon-legacy-580000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-580000" does not exist: client config: context "ingress-addon-legacy-580000" does not exist]
	W0229 09:58:54.766610    4458 out.go:239] * 
	* 
	W0229 09:58:54.770178    4458 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 09:58:54.791539    4458 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
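Editor's note: every kubectl apply retry above fails with the same "connection to the server localhost:8443 was refused", i.e. the addon never had an apiserver to talk to after the StartLegacyK8sCluster failure. A minimal sketch for confirming that before re-enabling the addon, assuming the profile container is still running:

	out/minikube-darwin-amd64 status -p ingress-addon-legacy-580000
	# look for a kube-apiserver container and probe it from inside the node, as the kubeadm output suggests
	out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-580000 -- "docker ps -a | grep kube-apiserver | grep -v pause"
	out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-580000 -- "curl -sk https://localhost:8443/healthz"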
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-580000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-580000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f832b0f68af23cb1e06ad3bd638807b1bb6f9bf9df72de6cd9049fb37cee4125",
	        "Created": "2024-02-29T17:52:41.540717298Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 50257,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-29T17:52:41.758558303Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a5b872dc86053f77fb58d93168e89c4b0fa5961a7ed628d630f6cd6decd7bca0",
	        "ResolvConfPath": "/var/lib/docker/containers/f832b0f68af23cb1e06ad3bd638807b1bb6f9bf9df72de6cd9049fb37cee4125/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f832b0f68af23cb1e06ad3bd638807b1bb6f9bf9df72de6cd9049fb37cee4125/hostname",
	        "HostsPath": "/var/lib/docker/containers/f832b0f68af23cb1e06ad3bd638807b1bb6f9bf9df72de6cd9049fb37cee4125/hosts",
	        "LogPath": "/var/lib/docker/containers/f832b0f68af23cb1e06ad3bd638807b1bb6f9bf9df72de6cd9049fb37cee4125/f832b0f68af23cb1e06ad3bd638807b1bb6f9bf9df72de6cd9049fb37cee4125-json.log",
	        "Name": "/ingress-addon-legacy-580000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-580000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-580000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5ab44aa2297df4f970e3b122558b75d3b3af5410dce7ea1efa8bb6b3cbf5b143-init/diff:/var/lib/docker/overlay2/27fbcff5021de980a082cd343434b8923388c3122a97247e81bdc445b5997307/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ab44aa2297df4f970e3b122558b75d3b3af5410dce7ea1efa8bb6b3cbf5b143/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ab44aa2297df4f970e3b122558b75d3b3af5410dce7ea1efa8bb6b3cbf5b143/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ab44aa2297df4f970e3b122558b75d3b3af5410dce7ea1efa8bb6b3cbf5b143/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-580000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-580000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-580000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-580000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-580000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0ca97d968d3a768af6d651933bd226e78bb31eb624d6172b6311c128d57e1b3a",
	            "SandboxKey": "/var/run/docker/netns/0ca97d968d3a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50583"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50584"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50585"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50581"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50582"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-580000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f832b0f68af2",
	                        "ingress-addon-legacy-580000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "36d76060fb93f6b69f3df4154e14861fe2c46871f04e9dc11292fa5a8173909b",
	                    "EndpointID": "5ba69d5117ccc6f1901124e4bbaf5639369178a25edeec006e8483e772fdf010",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-580000",
	                        "f832b0f68af2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
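Editor's note: the inspect dump above is mostly useful here for the port map (8443/tcp forwarded to 127.0.0.1:50582). A sketch of pulling just that field, reusing the same Go-template pattern the test harness applies to 22/tcp:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ingress-addon-legacy-580000
	docker port ingress-addon-legacy-580000 8443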
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-580000 -n ingress-addon-legacy-580000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-580000 -n ingress-addon-legacy-580000: exit status 6 (385.365734ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 09:58:55.238766    4483 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-580000" does not appear in /Users/jenkins/minikube-integration/18259-932/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-580000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (119.55s)
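The status probe above exits with status 6 because the profile's endpoint is missing from the kubeconfig ("ingress-addon-legacy-580000" does not appear in /Users/jenkins/minikube-integration/18259-932/kubeconfig), which is also why the stdout warns that kubectl points at a stale context. A minimal sketch of the suggested recovery, assuming the kubeconfig path from the log:

    # See which contexts the test kubeconfig actually contains.
    kubectl --kubeconfig /Users/jenkins/minikube-integration/18259-932/kubeconfig config get-contexts
    # Rewrite the context and endpoint for this profile, as the warning suggests.
    out/minikube-darwin-amd64 -p ingress-addon-legacy-580000 update-context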

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (117.41s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-580000 addons enable ingress-dns --alsologtostderr -v=5
E0229 09:59:18.995534    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 09:59:50.616199    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 10:00:18.308345    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-580000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m56.960526436s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 09:58:55.320805    4493 out.go:291] Setting OutFile to fd 1 ...
	I0229 09:58:55.321668    4493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 09:58:55.321678    4493 out.go:304] Setting ErrFile to fd 2...
	I0229 09:58:55.321685    4493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 09:58:55.322264    4493 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
	I0229 09:58:55.322625    4493 mustload.go:65] Loading cluster: ingress-addon-legacy-580000
	I0229 09:58:55.322890    4493 config.go:182] Loaded profile config "ingress-addon-legacy-580000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 09:58:55.322909    4493 addons.go:597] checking whether the cluster is paused
	I0229 09:58:55.322988    4493 config.go:182] Loaded profile config "ingress-addon-legacy-580000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 09:58:55.323005    4493 host.go:66] Checking if "ingress-addon-legacy-580000" exists ...
	I0229 09:58:55.323358    4493 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-580000 --format={{.State.Status}}
	I0229 09:58:55.378933    4493 ssh_runner.go:195] Run: systemctl --version
	I0229 09:58:55.379011    4493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-580000
	I0229 09:58:55.430754    4493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50583 SSHKeyPath:/Users/jenkins/minikube-integration/18259-932/.minikube/machines/ingress-addon-legacy-580000/id_rsa Username:docker}
	I0229 09:58:55.516341    4493 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 09:58:55.554229    4493 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0229 09:58:55.576278    4493 config.go:182] Loaded profile config "ingress-addon-legacy-580000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 09:58:55.576303    4493 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-580000"
	I0229 09:58:55.576315    4493 addons.go:234] Setting addon ingress-dns=true in "ingress-addon-legacy-580000"
	I0229 09:58:55.576370    4493 host.go:66] Checking if "ingress-addon-legacy-580000" exists ...
	I0229 09:58:55.576969    4493 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-580000 --format={{.State.Status}}
	I0229 09:58:55.648742    4493 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0229 09:58:55.670157    4493 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0229 09:58:55.692169    4493 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0229 09:58:55.692204    4493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0229 09:58:55.692356    4493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-580000
	I0229 09:58:55.743563    4493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50583 SSHKeyPath:/Users/jenkins/minikube-integration/18259-932/.minikube/machines/ingress-addon-legacy-580000/id_rsa Username:docker}
	I0229 09:58:55.851229    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 09:58:55.906993    4493 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:58:55.907019    4493 retry.go:31] will retry after 301.93632ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:58:56.209111    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 09:58:56.265271    4493 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:58:56.265291    4493 retry.go:31] will retry after 445.906652ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:58:56.712004    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 09:58:56.770595    4493 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:58:56.770621    4493 retry.go:31] will retry after 590.707048ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:58:57.361891    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 09:58:57.418889    4493 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:58:57.418911    4493 retry.go:31] will retry after 917.321791ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:58:58.337746    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 09:58:58.390859    4493 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:58:58.390876    4493 retry.go:31] will retry after 1.232850832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:58:59.623874    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 09:58:59.680582    4493 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:58:59.680601    4493 retry.go:31] will retry after 2.806898372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:59:02.487908    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 09:59:02.549524    4493 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:59:02.549541    4493 retry.go:31] will retry after 3.295412902s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:59:05.846565    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 09:59:05.903868    4493 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:59:05.903887    4493 retry.go:31] will retry after 2.209631191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:59:08.114931    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 09:59:08.223628    4493 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:59:08.223646    4493 retry.go:31] will retry after 7.683898493s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:59:15.907643    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 09:59:15.966586    4493 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:59:15.966606    4493 retry.go:31] will retry after 11.446764527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:59:27.413682    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 09:59:27.472765    4493 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:59:27.472785    4493 retry.go:31] will retry after 20.642636055s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:59:48.115277    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 09:59:48.172132    4493 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 09:59:48.172148    4493 retry.go:31] will retry after 31.032524082s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 10:00:19.204447    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 10:00:19.262174    4493 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 10:00:19.262204    4493 retry.go:31] will retry after 32.789727738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 10:00:52.052037    4493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 10:00:52.121033    4493 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 10:00:52.143121    4493 out.go:177] 
	W0229 10:00:52.165158    4493 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0229 10:00:52.165191    4493 out.go:239] * 
	* 
	W0229 10:00:52.168846    4493 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 10:00:52.190945    4493 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
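Every retry in the stderr log above fails with "The connection to the server localhost:8443 was refused", i.e. the apiserver inside the node never answers, so the ingress-dns manifest can never be applied no matter how long the backoff runs. A hypothetical sanity check, assuming curl is available inside the kicbase node image:

    # Ask the apiserver inside the node whether it is serving at all.
    out/minikube-darwin-amd64 -p ingress-addon-legacy-580000 ssh -- curl -k -sS https://localhost:8443/healthz || echo 'apiserver not reachable'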
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-580000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-580000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f832b0f68af23cb1e06ad3bd638807b1bb6f9bf9df72de6cd9049fb37cee4125",
	        "Created": "2024-02-29T17:52:41.540717298Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 50257,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-29T17:52:41.758558303Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a5b872dc86053f77fb58d93168e89c4b0fa5961a7ed628d630f6cd6decd7bca0",
	        "ResolvConfPath": "/var/lib/docker/containers/f832b0f68af23cb1e06ad3bd638807b1bb6f9bf9df72de6cd9049fb37cee4125/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f832b0f68af23cb1e06ad3bd638807b1bb6f9bf9df72de6cd9049fb37cee4125/hostname",
	        "HostsPath": "/var/lib/docker/containers/f832b0f68af23cb1e06ad3bd638807b1bb6f9bf9df72de6cd9049fb37cee4125/hosts",
	        "LogPath": "/var/lib/docker/containers/f832b0f68af23cb1e06ad3bd638807b1bb6f9bf9df72de6cd9049fb37cee4125/f832b0f68af23cb1e06ad3bd638807b1bb6f9bf9df72de6cd9049fb37cee4125-json.log",
	        "Name": "/ingress-addon-legacy-580000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-580000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-580000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5ab44aa2297df4f970e3b122558b75d3b3af5410dce7ea1efa8bb6b3cbf5b143-init/diff:/var/lib/docker/overlay2/27fbcff5021de980a082cd343434b8923388c3122a97247e81bdc445b5997307/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ab44aa2297df4f970e3b122558b75d3b3af5410dce7ea1efa8bb6b3cbf5b143/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ab44aa2297df4f970e3b122558b75d3b3af5410dce7ea1efa8bb6b3cbf5b143/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ab44aa2297df4f970e3b122558b75d3b3af5410dce7ea1efa8bb6b3cbf5b143/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-580000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-580000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-580000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-580000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-580000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0ca97d968d3a768af6d651933bd226e78bb31eb624d6172b6311c128d57e1b3a",
	            "SandboxKey": "/var/run/docker/netns/0ca97d968d3a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50583"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50584"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50585"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50581"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50582"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-580000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f832b0f68af2",
	                        "ingress-addon-legacy-580000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "36d76060fb93f6b69f3df4154e14861fe2c46871f04e9dc11292fa5a8173909b",
	                    "EndpointID": "5ba69d5117ccc6f1901124e4bbaf5639369178a25edeec006e8483e772fdf010",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-580000",
	                        "f832b0f68af2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-580000 -n ingress-addon-legacy-580000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-580000 -n ingress-addon-legacy-580000: exit status 6 (394.113476ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 10:00:52.645191    4534 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-580000" does not appear in /Users/jenkins/minikube-integration/18259-932/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-580000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (117.41s)
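When the enable step exits with MK_ADDON_ENABLE as above, the advice box asks for a full log bundle to attach to a GitHub issue. A minimal way to collect it for this profile (the output file name is only an example):

    # Capture the cluster and driver logs the issue template asks for.
    out/minikube-darwin-amd64 -p ingress-addon-legacy-580000 logs --file=logs.txt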

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.44s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:201: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-580000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-580000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f832b0f68af23cb1e06ad3bd638807b1bb6f9bf9df72de6cd9049fb37cee4125",
	        "Created": "2024-02-29T17:52:41.540717298Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 50257,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-29T17:52:41.758558303Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a5b872dc86053f77fb58d93168e89c4b0fa5961a7ed628d630f6cd6decd7bca0",
	        "ResolvConfPath": "/var/lib/docker/containers/f832b0f68af23cb1e06ad3bd638807b1bb6f9bf9df72de6cd9049fb37cee4125/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f832b0f68af23cb1e06ad3bd638807b1bb6f9bf9df72de6cd9049fb37cee4125/hostname",
	        "HostsPath": "/var/lib/docker/containers/f832b0f68af23cb1e06ad3bd638807b1bb6f9bf9df72de6cd9049fb37cee4125/hosts",
	        "LogPath": "/var/lib/docker/containers/f832b0f68af23cb1e06ad3bd638807b1bb6f9bf9df72de6cd9049fb37cee4125/f832b0f68af23cb1e06ad3bd638807b1bb6f9bf9df72de6cd9049fb37cee4125-json.log",
	        "Name": "/ingress-addon-legacy-580000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-580000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-580000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5ab44aa2297df4f970e3b122558b75d3b3af5410dce7ea1efa8bb6b3cbf5b143-init/diff:/var/lib/docker/overlay2/27fbcff5021de980a082cd343434b8923388c3122a97247e81bdc445b5997307/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ab44aa2297df4f970e3b122558b75d3b3af5410dce7ea1efa8bb6b3cbf5b143/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ab44aa2297df4f970e3b122558b75d3b3af5410dce7ea1efa8bb6b3cbf5b143/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ab44aa2297df4f970e3b122558b75d3b3af5410dce7ea1efa8bb6b3cbf5b143/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-580000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-580000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-580000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-580000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-580000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0ca97d968d3a768af6d651933bd226e78bb31eb624d6172b6311c128d57e1b3a",
	            "SandboxKey": "/var/run/docker/netns/0ca97d968d3a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50583"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50584"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50585"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50581"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50582"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-580000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f832b0f68af2",
	                        "ingress-addon-legacy-580000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "36d76060fb93f6b69f3df4154e14861fe2c46871f04e9dc11292fa5a8173909b",
	                    "EndpointID": "5ba69d5117ccc6f1901124e4bbaf5639369178a25edeec006e8483e772fdf010",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-580000",
	                        "f832b0f68af2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-580000 -n ingress-addon-legacy-580000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-580000 -n ingress-addon-legacy-580000: exit status 6 (393.375821ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 10:00:53.089362    4546 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-580000" does not appear in /Users/jenkins/minikube-integration/18259-932/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-580000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.44s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (870.74s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-409000 ssh -- ls /minikube-host
E0229 10:05:42.038287    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 10:09:18.986866    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 10:09:50.607990    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 10:11:13.659654    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 10:14:19.017022    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 10:14:50.638205    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 10:19:19.013941    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 10:19:50.633596    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-409000 ssh -- ls /minikube-host: signal: killed (14m30.293259617s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-409000 ssh -- ls /minikube-host" : signal: killed
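The ssh listing above is killed by the test's 14m30s deadline rather than returning an error, which suggests the /minikube-host bind mount (backed by /host_mnt/Users in the Mounts section of the inspect output below) is hanging instead of being absent. A hypothetical manual check that bounds the wait, assuming timeout(1) is available in the node image:

    # Re-run the listing with a hard time limit instead of letting it hang.
    docker exec mount-start-2-409000 sh -c 'timeout 10 ls /minikube-host || echo "mount did not respond"'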
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountPostStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-409000
helpers_test.go:235: (dbg) docker inspect mount-start-2-409000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24082ce86eacf23f8366662a8e381d2290ea22e5fa9e7e53ecbd20776e8efea9",
	        "Created": "2024-02-29T18:05:03.248613049Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 106444,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-29T18:05:15.075312917Z",
	            "FinishedAt": "2024-02-29T18:05:12.302056262Z"
	        },
	        "Image": "sha256:a5b872dc86053f77fb58d93168e89c4b0fa5961a7ed628d630f6cd6decd7bca0",
	        "ResolvConfPath": "/var/lib/docker/containers/24082ce86eacf23f8366662a8e381d2290ea22e5fa9e7e53ecbd20776e8efea9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24082ce86eacf23f8366662a8e381d2290ea22e5fa9e7e53ecbd20776e8efea9/hostname",
	        "HostsPath": "/var/lib/docker/containers/24082ce86eacf23f8366662a8e381d2290ea22e5fa9e7e53ecbd20776e8efea9/hosts",
	        "LogPath": "/var/lib/docker/containers/24082ce86eacf23f8366662a8e381d2290ea22e5fa9e7e53ecbd20776e8efea9/24082ce86eacf23f8366662a8e381d2290ea22e5fa9e7e53ecbd20776e8efea9-json.log",
	        "Name": "/mount-start-2-409000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "mount-start-2-409000:/var",
	                "/host_mnt/Users:/minikube-host"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-2-409000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d4501273d30cb452e6d131765a4315ecdf65174ae6e9dc8912af831da83b8586-init/diff:/var/lib/docker/overlay2/27fbcff5021de980a082cd343434b8923388c3122a97247e81bdc445b5997307/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d4501273d30cb452e6d131765a4315ecdf65174ae6e9dc8912af831da83b8586/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d4501273d30cb452e6d131765a4315ecdf65174ae6e9dc8912af831da83b8586/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d4501273d30cb452e6d131765a4315ecdf65174ae6e9dc8912af831da83b8586/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "mount-start-2-409000",
	                "Source": "/var/lib/docker/volumes/mount-start-2-409000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/host_mnt/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-2-409000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-2-409000",
	                "name.minikube.sigs.k8s.io": "mount-start-2-409000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ecd3e8176454dce27e122ea392f63649459742e5317dce50831b695806569f5d",
	            "SandboxKey": "/var/run/docker/netns/ecd3e8176454",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50930"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50926"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50927"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50928"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50929"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-2-409000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "24082ce86eac",
	                        "mount-start-2-409000"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "8bd5bb226166634b2077d833be4648e91ee352f9a02dcd5e834ada30c2f88b36",
	                    "EndpointID": "9ea8dac2f2a86a888e331948e204857af7f7c6302c3162f641f55077399cb48e",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "mount-start-2-409000",
	                        "24082ce86eac"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
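helpers_test.go: (editor's note) the docker inspect output above shows the container itself is healthy: all five exposed ports (22, 2376, 5000, 8443, 32443) were requested with HostPort "0" and ended up published on dynamically assigned localhost ports 50926-50930, and the /Users bind mount to /minikube-host is present. A minimal manual re-check of one mapping, assuming the same container name as in the log, could use either the standard docker CLI or the same Go template minikube itself runs later in this report:

	# print the host side of the 22/tcp mapping (expected form: 127.0.0.1:50930)
	docker port mount-start-2-409000 22
	# equivalent query via the inspect template minikube uses internally
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' mount-start-2-409000
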
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-409000 -n mount-start-2-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-409000 -n mount-start-2-409000: exit status 6 (388.247263ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 10:19:53.292747    6473 status.go:415] kubeconfig endpoint: extract IP: "mount-start-2-409000" does not appear in /Users/jenkins/minikube-integration/18259-932/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-409000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountPostStop (870.74s)
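(editor's note) The exit status 6 here comes from the kubeconfig check, not from the container: status reports the host as "Running" but cannot extract an API endpoint because the "mount-start-2-409000" profile is absent from the kubeconfig file. The stdout above already names the suggested remedy; a manual reproduction of that suggestion, outside the test harness and assuming kubectl is on PATH, would look roughly like the following (whether it clears this particular failure is not shown in this report):

	out/minikube-darwin-amd64 -p mount-start-2-409000 update-context
	kubectl config current-context
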

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (750.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-636000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0229 10:22:22.059737    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 10:24:19.009762    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 10:24:50.630546    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 10:27:53.680142    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 10:29:19.004895    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 10:29:50.626432    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
multinode_test.go:86: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-636000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m30.528811522s)

                                                
                                                
-- stdout --
	* [multinode-636000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18259
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node multinode-636000 in cluster multinode-636000
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-636000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 10:21:02.357475    6570 out.go:291] Setting OutFile to fd 1 ...
	I0229 10:21:02.358068    6570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 10:21:02.358080    6570 out.go:304] Setting ErrFile to fd 2...
	I0229 10:21:02.358086    6570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 10:21:02.358399    6570 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
	I0229 10:21:02.379594    6570 out.go:298] Setting JSON to false
	I0229 10:21:02.402038    6570 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":3032,"bootTime":1709227830,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0229 10:21:02.402141    6570 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 10:21:02.423228    6570 out.go:177] * [multinode-636000] minikube v1.32.0 on Darwin 14.3.1
	I0229 10:21:02.465617    6570 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 10:21:02.465688    6570 notify.go:220] Checking for updates...
	I0229 10:21:02.508429    6570 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig
	I0229 10:21:02.550306    6570 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0229 10:21:02.571372    6570 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 10:21:02.592421    6570 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	I0229 10:21:02.613180    6570 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 10:21:02.634964    6570 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 10:21:02.691792    6570 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 10:21:02.691959    6570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 10:21:02.792772    6570 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-29 18:21:02.782493928 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 10:21:02.835541    6570 out.go:177] * Using the docker driver based on user configuration
	I0229 10:21:02.856781    6570 start.go:299] selected driver: docker
	I0229 10:21:02.856809    6570 start.go:903] validating driver "docker" against <nil>
	I0229 10:21:02.856823    6570 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 10:21:02.861259    6570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 10:21:02.960588    6570 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:false NGoroutines:115 SystemTime:2024-02-29 18:21:02.951049315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 10:21:02.960801    6570 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 10:21:02.960977    6570 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 10:21:02.982164    6570 out.go:177] * Using Docker Desktop driver with root privileges
	I0229 10:21:03.003717    6570 cni.go:84] Creating CNI manager for ""
	I0229 10:21:03.003742    6570 cni.go:136] 0 nodes found, recommending kindnet
	I0229 10:21:03.003751    6570 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0229 10:21:03.003765    6570 start_flags.go:323] config:
	{Name:multinode-636000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-636000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 10:21:03.025683    6570 out.go:177] * Starting control plane node multinode-636000 in cluster multinode-636000
	I0229 10:21:03.067566    6570 cache.go:121] Beginning downloading kic base image for docker with docker
	I0229 10:21:03.088837    6570 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0229 10:21:03.130772    6570 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 10:21:03.130815    6570 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 10:21:03.130846    6570 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 10:21:03.130867    6570 cache.go:56] Caching tarball of preloaded images
	I0229 10:21:03.131109    6570 preload.go:174] Found /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 10:21:03.131126    6570 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 10:21:03.132406    6570 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/multinode-636000/config.json ...
	I0229 10:21:03.132496    6570 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/multinode-636000/config.json: {Name:mk3a0bbfc816e7dde21d389e00e5258fc618caa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 10:21:03.180889    6570 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0229 10:21:03.180912    6570 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0229 10:21:03.180926    6570 cache.go:194] Successfully downloaded all kic artifacts
	I0229 10:21:03.180964    6570 start.go:365] acquiring machines lock for multinode-636000: {Name:mk724cab9aafa05d3a600dc983677a58b42dc1e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 10:21:03.181107    6570 start.go:369] acquired machines lock for "multinode-636000" in 131.63µs
	I0229 10:21:03.181131    6570 start.go:93] Provisioning new machine with config: &{Name:multinode-636000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-636000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 10:21:03.181191    6570 start.go:125] createHost starting for "" (driver="docker")
	I0229 10:21:03.224925    6570 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0229 10:21:03.225289    6570 start.go:159] libmachine.API.Create for "multinode-636000" (driver="docker")
	I0229 10:21:03.225343    6570 client.go:168] LocalClient.Create starting
	I0229 10:21:03.225522    6570 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18259-932/.minikube/certs/ca.pem
	I0229 10:21:03.225616    6570 main.go:141] libmachine: Decoding PEM data...
	I0229 10:21:03.225642    6570 main.go:141] libmachine: Parsing certificate...
	I0229 10:21:03.225734    6570 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18259-932/.minikube/certs/cert.pem
	I0229 10:21:03.225803    6570 main.go:141] libmachine: Decoding PEM data...
	I0229 10:21:03.225831    6570 main.go:141] libmachine: Parsing certificate...
	I0229 10:21:03.226722    6570 cli_runner.go:164] Run: docker network inspect multinode-636000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0229 10:21:03.277029    6570 cli_runner.go:211] docker network inspect multinode-636000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0229 10:21:03.277124    6570 network_create.go:281] running [docker network inspect multinode-636000] to gather additional debugging logs...
	I0229 10:21:03.277150    6570 cli_runner.go:164] Run: docker network inspect multinode-636000
	W0229 10:21:03.326394    6570 cli_runner.go:211] docker network inspect multinode-636000 returned with exit code 1
	I0229 10:21:03.326425    6570 network_create.go:284] error running [docker network inspect multinode-636000]: docker network inspect multinode-636000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-636000 not found
	I0229 10:21:03.326439    6570 network_create.go:286] output of [docker network inspect multinode-636000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-636000 not found
	
	** /stderr **
	I0229 10:21:03.326561    6570 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 10:21:03.377277    6570 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 10:21:03.377657    6570 network.go:207] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00232d460}
	I0229 10:21:03.377680    6570 network_create.go:124] attempt to create docker network multinode-636000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0229 10:21:03.377748    6570 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-636000 multinode-636000
	I0229 10:21:03.462706    6570 network_create.go:108] docker network multinode-636000 192.168.58.0/24 created
	I0229 10:21:03.462744    6570 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-636000" container
	I0229 10:21:03.462859    6570 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0229 10:21:03.512596    6570 cli_runner.go:164] Run: docker volume create multinode-636000 --label name.minikube.sigs.k8s.io=multinode-636000 --label created_by.minikube.sigs.k8s.io=true
	I0229 10:21:03.563064    6570 oci.go:103] Successfully created a docker volume multinode-636000
	I0229 10:21:03.563195    6570 cli_runner.go:164] Run: docker run --rm --name multinode-636000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-636000 --entrypoint /usr/bin/test -v multinode-636000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0229 10:21:03.946019    6570 oci.go:107] Successfully prepared a docker volume multinode-636000
	I0229 10:21:03.946078    6570 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 10:21:03.946090    6570 kic.go:194] Starting extracting preloaded images to volume ...
	I0229 10:21:03.946197    6570 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-636000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0229 10:27:03.223042    6570 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 10:27:03.223206    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:27:03.274290    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:27:03.274396    6570 retry.go:31] will retry after 201.013129ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:03.477807    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:27:03.528026    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:27:03.528130    6570 retry.go:31] will retry after 397.520302ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:03.927015    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:27:03.977408    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:27:03.977519    6570 retry.go:31] will retry after 663.684727ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:04.643268    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:27:04.693766    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	W0229 10:27:04.693877    6570 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	
	W0229 10:27:04.693898    6570 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:04.693957    6570 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 10:27:04.694011    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:27:04.743020    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:27:04.743110    6570 retry.go:31] will retry after 137.638805ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:04.882503    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:27:04.932521    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:27:04.932625    6570 retry.go:31] will retry after 556.186723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:05.490282    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:27:05.539663    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:27:05.539754    6570 retry.go:31] will retry after 469.01039ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:06.011273    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:27:06.061722    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:27:06.061819    6570 retry.go:31] will retry after 557.820327ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:06.619879    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:27:06.668529    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	W0229 10:27:06.668627    6570 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	
	W0229 10:27:06.668643    6570 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:06.668661    6570 start.go:128] duration metric: createHost completed in 6m3.492583737s
	I0229 10:27:06.668667    6570 start.go:83] releasing machines lock for "multinode-636000", held for 6m3.492677959s
	W0229 10:27:06.668680    6570 start.go:694] error starting host: creating host: create host timed out in 360.000000 seconds
	I0229 10:27:06.669115    6570 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:27:06.718352    6570 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:27:06.718400    6570 delete.go:82] Unable to get host status for multinode-636000, assuming it has already been deleted: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	W0229 10:27:06.718481    6570 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I0229 10:27:06.718493    6570 start.go:709] Will try again in 5 seconds ...
	I0229 10:27:11.719710    6570 start.go:365] acquiring machines lock for multinode-636000: {Name:mk724cab9aafa05d3a600dc983677a58b42dc1e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 10:27:11.719860    6570 start.go:369] acquired machines lock for "multinode-636000" in 115.712µs
	I0229 10:27:11.719889    6570 start.go:96] Skipping create...Using existing machine configuration
	I0229 10:27:11.719901    6570 fix.go:54] fixHost starting: 
	I0229 10:27:11.720199    6570 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:27:11.769663    6570 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:27:11.769710    6570 fix.go:102] recreateIfNeeded on multinode-636000: state= err=unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:11.769730    6570 fix.go:107] machineExists: false. err=machine does not exist
	I0229 10:27:11.791494    6570 out.go:177] * docker "multinode-636000" container is missing, will recreate.
	I0229 10:27:11.834112    6570 delete.go:124] DEMOLISHING multinode-636000 ...
	I0229 10:27:11.834225    6570 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:27:11.884063    6570 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	W0229 10:27:11.884109    6570 stop.go:75] unable to get state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:11.884128    6570 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:11.884497    6570 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:27:11.933526    6570 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:27:11.933574    6570 delete.go:82] Unable to get host status for multinode-636000, assuming it has already been deleted: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:11.933660    6570 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-636000
	W0229 10:27:11.981949    6570 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-636000 returned with exit code 1
	I0229 10:27:11.981981    6570 kic.go:371] could not find the container multinode-636000 to remove it. will try anyways
	I0229 10:27:11.982051    6570 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:27:12.031525    6570 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	W0229 10:27:12.031578    6570 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:12.031674    6570 cli_runner.go:164] Run: docker exec --privileged -t multinode-636000 /bin/bash -c "sudo init 0"
	W0229 10:27:12.081155    6570 cli_runner.go:211] docker exec --privileged -t multinode-636000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0229 10:27:12.081194    6570 oci.go:650] error shutdown multinode-636000: docker exec --privileged -t multinode-636000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:13.081516    6570 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:27:13.134170    6570 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:27:13.134215    6570 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:13.134226    6570 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:27:13.134251    6570 retry.go:31] will retry after 597.541102ms: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:13.732223    6570 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:27:13.781761    6570 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:27:13.781804    6570 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:13.781818    6570 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:27:13.781845    6570 retry.go:31] will retry after 919.837459ms: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:14.702033    6570 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:27:14.751162    6570 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:27:14.751206    6570 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:14.751217    6570 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:27:14.751240    6570 retry.go:31] will retry after 681.526807ms: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:15.433293    6570 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:27:15.482988    6570 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:27:15.483031    6570 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:15.483042    6570 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:27:15.483068    6570 retry.go:31] will retry after 1.820245459s: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:17.304895    6570 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:27:17.377636    6570 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:27:17.377676    6570 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:17.377686    6570 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:27:17.377711    6570 retry.go:31] will retry after 2.412133021s: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:19.790041    6570 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:27:19.839595    6570 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:27:19.839640    6570 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:19.839653    6570 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:27:19.839680    6570 retry.go:31] will retry after 4.905056105s: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:24.745563    6570 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:27:24.796387    6570 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:27:24.796430    6570 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:27:24.796440    6570 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:27:24.796470    6570 oci.go:88] couldn't shut down multinode-636000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	 
	I0229 10:27:24.796543    6570 cli_runner.go:164] Run: docker rm -f -v multinode-636000
	I0229 10:27:24.847317    6570 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-636000
	W0229 10:27:24.895954    6570 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-636000 returned with exit code 1
	I0229 10:27:24.896064    6570 cli_runner.go:164] Run: docker network inspect multinode-636000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 10:27:24.945637    6570 cli_runner.go:164] Run: docker network rm multinode-636000
	I0229 10:27:25.053079    6570 fix.go:114] Sleeping 1 second for extra luck!
	I0229 10:27:26.053227    6570 start.go:125] createHost starting for "" (driver="docker")
	I0229 10:27:26.076304    6570 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0229 10:27:26.076468    6570 start.go:159] libmachine.API.Create for "multinode-636000" (driver="docker")
	I0229 10:27:26.076517    6570 client.go:168] LocalClient.Create starting
	I0229 10:27:26.076718    6570 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18259-932/.minikube/certs/ca.pem
	I0229 10:27:26.076809    6570 main.go:141] libmachine: Decoding PEM data...
	I0229 10:27:26.076834    6570 main.go:141] libmachine: Parsing certificate...
	I0229 10:27:26.076914    6570 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18259-932/.minikube/certs/cert.pem
	I0229 10:27:26.076983    6570 main.go:141] libmachine: Decoding PEM data...
	I0229 10:27:26.077010    6570 main.go:141] libmachine: Parsing certificate...
	I0229 10:27:26.078025    6570 cli_runner.go:164] Run: docker network inspect multinode-636000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0229 10:27:26.127880    6570 cli_runner.go:211] docker network inspect multinode-636000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0229 10:27:26.127968    6570 network_create.go:281] running [docker network inspect multinode-636000] to gather additional debugging logs...
	I0229 10:27:26.127985    6570 cli_runner.go:164] Run: docker network inspect multinode-636000
	W0229 10:27:26.183439    6570 cli_runner.go:211] docker network inspect multinode-636000 returned with exit code 1
	I0229 10:27:26.183494    6570 network_create.go:284] error running [docker network inspect multinode-636000]: docker network inspect multinode-636000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-636000 not found
	I0229 10:27:26.183513    6570 network_create.go:286] output of [docker network inspect multinode-636000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-636000 not found
	
	** /stderr **
	I0229 10:27:26.183751    6570 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 10:27:26.264328    6570 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 10:27:26.265946    6570 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 10:27:26.266663    6570 network.go:207] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00232d930}
	I0229 10:27:26.266686    6570 network_create.go:124] attempt to create docker network multinode-636000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0229 10:27:26.266793    6570 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-636000 multinode-636000
	W0229 10:27:26.326518    6570 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-636000 multinode-636000 returned with exit code 1
	W0229 10:27:26.326552    6570 network_create.go:149] failed to create docker network multinode-636000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-636000 multinode-636000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0229 10:27:26.326574    6570 network_create.go:116] failed to create docker network multinode-636000 192.168.67.0/24, will retry: subnet is taken
	I0229 10:27:26.328003    6570 network.go:210] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 10:27:26.328462    6570 network.go:207] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023f01a0}
	I0229 10:27:26.328476    6570 network_create.go:124] attempt to create docker network multinode-636000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0229 10:27:26.328555    6570 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-636000 multinode-636000
	I0229 10:27:26.416685    6570 network_create.go:108] docker network multinode-636000 192.168.76.0/24 created
	I0229 10:27:26.416725    6570 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-636000" container
	I0229 10:27:26.416825    6570 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0229 10:27:26.467878    6570 cli_runner.go:164] Run: docker volume create multinode-636000 --label name.minikube.sigs.k8s.io=multinode-636000 --label created_by.minikube.sigs.k8s.io=true
	I0229 10:27:26.518684    6570 oci.go:103] Successfully created a docker volume multinode-636000
	I0229 10:27:26.518801    6570 cli_runner.go:164] Run: docker run --rm --name multinode-636000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-636000 --entrypoint /usr/bin/test -v multinode-636000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0229 10:27:26.806554    6570 oci.go:107] Successfully prepared a docker volume multinode-636000
	I0229 10:27:26.806586    6570 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 10:27:26.806599    6570 kic.go:194] Starting extracting preloaded images to volume ...
	I0229 10:27:26.806702    6570 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-636000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0229 10:33:26.124052    6570 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 10:33:26.127036    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:33:26.182463    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:33:26.182573    6570 retry.go:31] will retry after 279.640689ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:33:26.464616    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:33:26.516058    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:33:26.516170    6570 retry.go:31] will retry after 492.14109ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:33:27.009124    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:33:27.062341    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:33:27.062452    6570 retry.go:31] will retry after 528.986984ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:33:27.591972    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:33:27.642449    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	W0229 10:33:27.642551    6570 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	
	W0229 10:33:27.642569    6570 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:33:27.642633    6570 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 10:33:27.642684    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:33:27.692233    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:33:27.692328    6570 retry.go:31] will retry after 220.508213ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:33:27.913323    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:33:27.966769    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:33:27.966869    6570 retry.go:31] will retry after 200.741567ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:33:28.169958    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:33:28.220234    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:33:28.220330    6570 retry.go:31] will retry after 774.174505ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:33:28.996764    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:33:29.050123    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	W0229 10:33:29.050231    6570 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	
	W0229 10:33:29.050250    6570 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:33:29.050266    6570 start.go:128] duration metric: createHost completed in 6m2.951815076s
	I0229 10:33:29.050331    6570 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 10:33:29.050382    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:33:29.100956    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:33:29.101047    6570 retry.go:31] will retry after 166.397953ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:33:29.269861    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:33:29.320420    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:33:29.320515    6570 retry.go:31] will retry after 520.938837ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:33:29.842954    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:33:29.899924    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:33:29.900019    6570 retry.go:31] will retry after 679.2413ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:33:30.579878    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:33:30.630109    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	W0229 10:33:30.630206    6570 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	
	W0229 10:33:30.630238    6570 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:33:30.630293    6570 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 10:33:30.630344    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:33:30.679021    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:33:30.679123    6570 retry.go:31] will retry after 134.0655ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:33:30.813721    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:33:30.866164    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:33:30.866259    6570 retry.go:31] will retry after 395.98229ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:33:31.264404    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:33:31.316125    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:33:31.316221    6570 retry.go:31] will retry after 757.274841ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:33:32.075805    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:33:32.125614    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:33:32.125715    6570 retry.go:31] will retry after 501.967216ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:33:32.628118    6570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:33:32.679461    6570 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	W0229 10:33:32.679564    6570 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	
	W0229 10:33:32.679579    6570 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:33:32.679591    6570 fix.go:56] fixHost completed within 6m20.91473436s
	I0229 10:33:32.679598    6570 start.go:83] releasing machines lock for "multinode-636000", held for 6m20.914770478s
	W0229 10:33:32.679677    6570 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-636000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-636000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0229 10:33:32.722869    6570 out.go:177] 
	W0229 10:33:32.744015    6570 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0229 10:33:32.744040    6570 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0229 10:33:32.744053    6570 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0229 10:33:32.765054    6570 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:88: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-636000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-636000
helpers_test.go:235: (dbg) docker inspect multinode-636000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-636000",
	        "Id": "00c1996055aa3b2ea80cdc2338072a3b25a3d3a7a9dbbeb8ec64b5100d2e2cd8",
	        "Created": "2024-02-29T18:27:26.377401079Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-636000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (118.154591ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 10:33:33.010958    6866 status.go:249] status error: host: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (750.71s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (103.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-636000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:509: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-636000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (101.464173ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-636000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:511: failed to create busybox deployment to multinode cluster
multinode_test.go:514: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-636000 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-636000 -- rollout status deployment/busybox: exit status 1 (99.567383ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-636000"

                                                
                                                
** /stderr **
multinode_test.go:516: failed to deploy busybox to multinode cluster
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.069356ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-636000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.524776ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-636000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.514699ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-636000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.798949ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-636000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.388999ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-636000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.561376ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-636000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.078399ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-636000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.162487ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-636000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
E0229 10:34:19.050864    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.803341ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-636000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.122698ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-636000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
E0229 10:34:50.671960    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.067982ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-636000"

                                                
                                                
** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:540: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:544: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:544: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (99.328074ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-636000"

                                                
                                                
** /stderr **
multinode_test.go:546: failed get Pod names
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-636000 -- exec  -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-636000 -- exec  -- nslookup kubernetes.io: exit status 1 (99.682364ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-636000"

                                                
                                                
** /stderr **
multinode_test.go:554: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-636000 -- exec  -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-636000 -- exec  -- nslookup kubernetes.default: exit status 1 (99.002289ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-636000"

                                                
                                                
** /stderr **
multinode_test.go:564: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-636000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-636000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (99.124314ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-636000"

                                                
                                                
** /stderr **
multinode_test.go:572: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-636000
helpers_test.go:235: (dbg) docker inspect multinode-636000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-636000",
	        "Id": "00c1996055aa3b2ea80cdc2338072a3b25a3d3a7a9dbbeb8ec64b5100d2e2cd8",
	        "Created": "2024-02-29T18:27:26.377401079Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-636000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (114.460981ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 10:35:16.273340    6930 status.go:249] status error: host: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (103.26s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:580: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-636000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (99.270582ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-636000"

                                                
                                                
** /stderr **
multinode_test.go:582: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-636000
helpers_test.go:235: (dbg) docker inspect multinode-636000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-636000",
	        "Id": "00c1996055aa3b2ea80cdc2338072a3b25a3d3a7a9dbbeb8ec64b5100d2e2cd8",
	        "Created": "2024-02-29T18:27:26.377401079Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-636000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (115.220738ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 10:35:16.541936    6939 status.go:249] status error: host: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.27s)

                                                
                                    
TestMultiNode/serial/AddNode (0.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-636000 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-636000 -v 3 --alsologtostderr: exit status 80 (207.284561ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 10:35:16.605360    6943 out.go:291] Setting OutFile to fd 1 ...
	I0229 10:35:16.605622    6943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 10:35:16.605628    6943 out.go:304] Setting ErrFile to fd 2...
	I0229 10:35:16.605631    6943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 10:35:16.605811    6943 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
	I0229 10:35:16.606132    6943 mustload.go:65] Loading cluster: multinode-636000
	I0229 10:35:16.606427    6943 config.go:182] Loaded profile config "multinode-636000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 10:35:16.606792    6943 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:35:16.656853    6943 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:35:16.680890    6943 out.go:177] 
	W0229 10:35:16.702477    6943 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	
	W0229 10:35:16.702497    6943 out.go:239] * 
	* 
	W0229 10:35:16.705459    6943 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 10:35:16.726378    6943 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:113: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-636000 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-636000
helpers_test.go:235: (dbg) docker inspect multinode-636000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-636000",
	        "Id": "00c1996055aa3b2ea80cdc2338072a3b25a3d3a7a9dbbeb8ec64b5100d2e2cd8",
	        "Created": "2024-02-29T18:27:26.377401079Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-636000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (114.080873ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 10:35:16.918019    6949 status.go:249] status error: host: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.38s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-636000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:211: (dbg) Non-zero exit: kubectl --context multinode-636000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (39.295749ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-636000

                                                
                                                
** /stderr **
multinode_test.go:213: failed to 'kubectl get nodes' with args "kubectl --context multinode-636000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:220: failed to decode json from label list: args "kubectl --context multinode-636000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
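The "unexpected end of JSON input" at multinode_test.go:220 follows directly from the context error above: kubectl wrote nothing to stdout, and decoding an empty payload with encoding/json produces exactly that message. A minimal reproduction, with an illustrative target type rather than the test's real one:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// kubectl produced no stdout because the context was missing, so the
	// test effectively tries to decode an empty payload.
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}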
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-636000
helpers_test.go:235: (dbg) docker inspect multinode-636000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-636000",
	        "Id": "00c1996055aa3b2ea80cdc2338072a3b25a3d3a7a9dbbeb8ec64b5100d2e2cd8",
	        "Created": "2024-02-29T18:27:26.377401079Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-636000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (112.859221ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 10:35:17.124093    6956 status.go:249] status error: host: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.21s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:156: expected profile "multinode-636000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-2-409000\",\"Status\":\"\",\"Config\":null,\"Active\":false}],\"valid\":[{\"Name\":\"multinode-636000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-636000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KV
MNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-636000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\
"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"
GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-636000
helpers_test.go:235: (dbg) docker inspect multinode-636000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-636000",
	        "Id": "00c1996055aa3b2ea80cdc2338072a3b25a3d3a7a9dbbeb8ec64b5100d2e2cd8",
	        "Created": "2024-02-29T18:27:26.377401079Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-636000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (114.088794ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 10:35:17.535904    6968 status.go:249] status error: host: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.41s)

                                                
                                    
TestMultiNode/serial/CopyFile (0.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-636000 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-636000 status --output json --alsologtostderr: exit status 7 (114.982561ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-636000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 10:35:17.599159    6972 out.go:291] Setting OutFile to fd 1 ...
	I0229 10:35:17.599352    6972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 10:35:17.599357    6972 out.go:304] Setting ErrFile to fd 2...
	I0229 10:35:17.599361    6972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 10:35:17.599570    6972 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
	I0229 10:35:17.599758    6972 out.go:298] Setting JSON to true
	I0229 10:35:17.599779    6972 mustload.go:65] Loading cluster: multinode-636000
	I0229 10:35:17.599815    6972 notify.go:220] Checking for updates...
	I0229 10:35:17.600056    6972 config.go:182] Loaded profile config "multinode-636000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 10:35:17.600072    6972 status.go:255] checking status of multinode-636000 ...
	I0229 10:35:17.600438    6972 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:35:17.651033    6972 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:35:17.651100    6972 status.go:330] multinode-636000 host status = "" (err=state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	)
	I0229 10:35:17.651124    6972 status.go:257] multinode-636000 status: &{Name:multinode-636000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0229 10:35:17.651144    6972 status.go:260] status error: host: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	E0229 10:35:17.651151    6972 status.go:263] The "multinode-636000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:181: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-636000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
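The unmarshal error at multinode_test.go:181 is a shape mismatch: with only a single (and missing) node, `status --output json` printed the lone object shown in stdout above, while the test decodes into a slice of statuses. A minimal reproduction with a stand-in Status type (the real type lives in minikube's cmd package and has more fields):

package main

import (
	"encoding/json"
	"fmt"
)

// Status is a trimmed stand-in for the type the test decodes into.
type Status struct {
	Name string
	Host string
}

func main() {
	raw := []byte(`{"Name":"multinode-636000","Host":"Nonexistent"}`)

	var many []Status
	fmt.Println(json.Unmarshal(raw, &many)) // json: cannot unmarshal object into Go value of type []main.Status

	var one Status
	fmt.Println(json.Unmarshal(raw, &one), one) // <nil> {multinode-636000 Nonexistent}
}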
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-636000
helpers_test.go:235: (dbg) docker inspect multinode-636000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-636000",
	        "Id": "00c1996055aa3b2ea80cdc2338072a3b25a3d3a7a9dbbeb8ec64b5100d2e2cd8",
	        "Created": "2024-02-29T18:27:26.377401079Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-636000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (116.048349ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 10:35:17.820115    6978 status.go:249] status error: host: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.28s)

                                                
                                    
TestMultiNode/serial/StopNode (0.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-636000 node stop m03
multinode_test.go:238: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-636000 node stop m03: exit status 85 (155.396197ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:240: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-636000 node stop m03": exit status 85
multinode_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-636000 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-636000 status: exit status 7 (113.713664ms)

                                                
                                                
-- stdout --
	multinode-636000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 10:35:18.089938    6984 status.go:260] status error: host: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	E0229 10:35:18.089950    6984 status.go:263] The "multinode-636000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-636000 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-636000 status --alsologtostderr: exit status 7 (113.553557ms)

                                                
                                                
-- stdout --
	multinode-636000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 10:35:18.152114    6988 out.go:291] Setting OutFile to fd 1 ...
	I0229 10:35:18.152302    6988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 10:35:18.152307    6988 out.go:304] Setting ErrFile to fd 2...
	I0229 10:35:18.152311    6988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 10:35:18.152510    6988 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
	I0229 10:35:18.152692    6988 out.go:298] Setting JSON to false
	I0229 10:35:18.152714    6988 mustload.go:65] Loading cluster: multinode-636000
	I0229 10:35:18.153209    6988 notify.go:220] Checking for updates...
	I0229 10:35:18.153987    6988 config.go:182] Loaded profile config "multinode-636000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 10:35:18.154010    6988 status.go:255] checking status of multinode-636000 ...
	I0229 10:35:18.154399    6988 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:35:18.203473    6988 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:35:18.203540    6988 status.go:330] multinode-636000 host status = "" (err=state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	)
	I0229 10:35:18.203563    6988 status.go:257] multinode-636000 status: &{Name:multinode-636000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0229 10:35:18.203582    6988 status.go:260] status error: host: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	E0229 10:35:18.203593    6988 status.go:263] The "multinode-636000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:257: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-636000 status --alsologtostderr": multinode-636000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:261: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-636000 status --alsologtostderr": multinode-636000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:265: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-636000 status --alsologtostderr": multinode-636000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent
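The three assertions above all reduce to counting component states in the plain-text status output; with the container gone, every line reads Nonexistent, so neither the expected Running nor Stopped counts can match. A rough sketch of that kind of check, assuming a simple substring count rather than the test's exact parsing:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status text as captured above: every component reports Nonexistent.
	status := "multinode-636000\ntype: Control Plane\nhost: Nonexistent\nkubelet: Nonexistent\napiserver: Nonexistent\nkubeconfig: Nonexistent\n"

	runningKubelets := strings.Count(status, "kubelet: Running")
	stoppedHosts := strings.Count(status, "host: Stopped")
	fmt.Println("running kubelets:", runningKubelets, "stopped hosts:", stoppedHosts) // 0 and 0, hence the failures
}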

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-636000
helpers_test.go:235: (dbg) docker inspect multinode-636000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-636000",
	        "Id": "00c1996055aa3b2ea80cdc2338072a3b25a3d3a7a9dbbeb8ec64b5100d2e2cd8",
	        "Created": "2024-02-29T18:27:26.377401079Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-636000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (114.572558ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 10:35:18.370942    6994 status.go:249] status error: host: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.55s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-636000 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-636000 node start m03 --alsologtostderr: exit status 85 (154.290638ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 10:35:18.488219    7000 out.go:291] Setting OutFile to fd 1 ...
	I0229 10:35:18.488994    7000 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 10:35:18.489003    7000 out.go:304] Setting ErrFile to fd 2...
	I0229 10:35:18.489010    7000 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 10:35:18.489631    7000 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
	I0229 10:35:18.489962    7000 mustload.go:65] Loading cluster: multinode-636000
	I0229 10:35:18.490215    7000 config.go:182] Loaded profile config "multinode-636000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 10:35:18.511482    7000 out.go:177] 
	W0229 10:35:18.532407    7000 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0229 10:35:18.532434    7000 out.go:239] * 
	* 
	W0229 10:35:18.536201    7000 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 10:35:18.557269    7000 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0229 10:35:18.488219    7000 out.go:291] Setting OutFile to fd 1 ...
I0229 10:35:18.488994    7000 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 10:35:18.489003    7000 out.go:304] Setting ErrFile to fd 2...
I0229 10:35:18.489010    7000 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 10:35:18.489631    7000 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
I0229 10:35:18.489962    7000 mustload.go:65] Loading cluster: multinode-636000
I0229 10:35:18.490215    7000 config.go:182] Loaded profile config "multinode-636000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 10:35:18.511482    7000 out.go:177] 
W0229 10:35:18.532407    7000 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0229 10:35:18.532434    7000 out.go:239] * 
* 
W0229 10:35:18.536201    7000 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0229 10:35:18.557269    7000 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-636000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-636000 status
multinode_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-636000 status: exit status 7 (114.666563ms)

                                                
                                                
-- stdout --
	multinode-636000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 10:35:18.695191    7002 status.go:260] status error: host: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	E0229 10:35:18.695202    7002 status.go:263] The "multinode-636000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:291: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-636000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-636000
helpers_test.go:235: (dbg) docker inspect multinode-636000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-636000",
	        "Id": "00c1996055aa3b2ea80cdc2338072a3b25a3d3a7a9dbbeb8ec64b5100d2e2cd8",
	        "Created": "2024-02-29T18:27:26.377401079Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-636000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (113.955475ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 10:35:18.861556    7008 status.go:249] status error: host: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.49s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (794.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-636000
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-636000
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-636000: exit status 82 (12.649214381s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-636000"  ...
	* Stopping node "multinode-636000"  ...
	* Stopping node "multinode-636000"  ...
	* Stopping node "multinode-636000"  ...
	* Stopping node "multinode-636000"  ...
	* Stopping node "multinode-636000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-636000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-636000" : exit status 82
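RestartKeepsNodes records the node list before stopping and, as the test name and the `node list` call above suggest, checks it again after the restart; here the stop fails with GUEST_STOP_TIMEOUT and the restart below exits 52, so there is never a healthy cluster to compare. A hypothetical sketch of that before/after check (binary path and profile name are taken from the log; the comparison itself is assumed, not the test's literal code):

package main

import (
	"fmt"
	"os/exec"
)

// nodeList captures `minikube node list -p <profile>` output.
func nodeList(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-amd64", "node", "list", "-p", profile).Output()
	return string(out), err
}

func main() {
	before, _ := nodeList("multinode-636000")
	// ... stop and restart the cluster here, as the test does ...
	after, _ := nodeList("multinode-636000")
	fmt.Println("nodes kept across restart:", before == after)
}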
multinode_test.go:323: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-636000 --wait=true -v=8 --alsologtostderr
E0229 10:39:02.098799    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 10:39:19.046718    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 10:39:50.669436    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 10:44:19.042802    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 10:44:33.717937    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 10:44:50.664626    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
multinode_test.go:323: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-636000 --wait=true -v=8 --alsologtostderr: exit status 52 (13m1.875240204s)

                                                
                                                
-- stdout --
	* [multinode-636000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18259
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-636000 in cluster multinode-636000
	* Pulling base image v0.0.42-1708944392-18244 ...
	* docker "multinode-636000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-636000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 10:35:31.639232    7031 out.go:291] Setting OutFile to fd 1 ...
	I0229 10:35:31.639482    7031 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 10:35:31.639487    7031 out.go:304] Setting ErrFile to fd 2...
	I0229 10:35:31.639490    7031 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 10:35:31.639678    7031 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
	I0229 10:35:31.641099    7031 out.go:298] Setting JSON to false
	I0229 10:35:31.664373    7031 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":3901,"bootTime":1709227830,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0229 10:35:31.664463    7031 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 10:35:31.685871    7031 out.go:177] * [multinode-636000] minikube v1.32.0 on Darwin 14.3.1
	I0229 10:35:31.730106    7031 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 10:35:31.730186    7031 notify.go:220] Checking for updates...
	I0229 10:35:31.753157    7031 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig
	I0229 10:35:31.781095    7031 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0229 10:35:31.801426    7031 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 10:35:31.822694    7031 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	I0229 10:35:31.843436    7031 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 10:35:31.865348    7031 config.go:182] Loaded profile config "multinode-636000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 10:35:31.865543    7031 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 10:35:31.921442    7031 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 10:35:31.921582    7031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 10:35:32.020792    7031 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:135 SystemTime:2024-02-29 18:35:32.01123433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=
cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker De
v Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) f
or an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 10:35:32.043114    7031 out.go:177] * Using the docker driver based on existing profile
	I0229 10:35:32.066460    7031 start.go:299] selected driver: docker
	I0229 10:35:32.066477    7031 start.go:903] validating driver "docker" against &{Name:multinode-636000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-636000 Namespace:default APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 10:35:32.066559    7031 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 10:35:32.066725    7031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 10:35:32.169669    7031 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:135 SystemTime:2024-02-29 18:35:32.159488741 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 10:35:32.173036    7031 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 10:35:32.173108    7031 cni.go:84] Creating CNI manager for ""
	I0229 10:35:32.173117    7031 cni.go:136] 1 nodes found, recommending kindnet
	I0229 10:35:32.173127    7031 start_flags.go:323] config:
	{Name:multinode-636000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-636000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 10:35:32.194830    7031 out.go:177] * Starting control plane node multinode-636000 in cluster multinode-636000
	I0229 10:35:32.217651    7031 cache.go:121] Beginning downloading kic base image for docker with docker
	I0229 10:35:32.238380    7031 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0229 10:35:32.281670    7031 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 10:35:32.281746    7031 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 10:35:32.281734    7031 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 10:35:32.281770    7031 cache.go:56] Caching tarball of preloaded images
	I0229 10:35:32.282007    7031 preload.go:174] Found /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 10:35:32.282030    7031 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 10:35:32.282887    7031 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/multinode-636000/config.json ...
	I0229 10:35:32.332495    7031 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0229 10:35:32.332517    7031 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0229 10:35:32.332538    7031 cache.go:194] Successfully downloaded all kic artifacts
	I0229 10:35:32.332584    7031 start.go:365] acquiring machines lock for multinode-636000: {Name:mk724cab9aafa05d3a600dc983677a58b42dc1e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 10:35:32.332673    7031 start.go:369] acquired machines lock for "multinode-636000" in 70.695µs
	I0229 10:35:32.332695    7031 start.go:96] Skipping create...Using existing machine configuration
	I0229 10:35:32.332705    7031 fix.go:54] fixHost starting: 
	I0229 10:35:32.332934    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:35:32.383473    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:35:32.383524    7031 fix.go:102] recreateIfNeeded on multinode-636000: state= err=unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:35:32.383545    7031 fix.go:107] machineExists: false. err=machine does not exist
	I0229 10:35:32.405617    7031 out.go:177] * docker "multinode-636000" container is missing, will recreate.
	I0229 10:35:32.447238    7031 delete.go:124] DEMOLISHING multinode-636000 ...
	I0229 10:35:32.447423    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:35:32.498151    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	W0229 10:35:32.498200    7031 stop.go:75] unable to get state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:35:32.498220    7031 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:35:32.498578    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:35:32.547515    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:35:32.547568    7031 delete.go:82] Unable to get host status for multinode-636000, assuming it has already been deleted: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:35:32.547654    7031 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-636000
	W0229 10:35:32.597057    7031 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-636000 returned with exit code 1
	I0229 10:35:32.597090    7031 kic.go:371] could not find the container multinode-636000 to remove it. will try anyways
	I0229 10:35:32.597163    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:35:32.646653    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	W0229 10:35:32.646699    7031 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:35:32.646790    7031 cli_runner.go:164] Run: docker exec --privileged -t multinode-636000 /bin/bash -c "sudo init 0"
	W0229 10:35:32.695725    7031 cli_runner.go:211] docker exec --privileged -t multinode-636000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0229 10:35:32.695753    7031 oci.go:650] error shutdown multinode-636000: docker exec --privileged -t multinode-636000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:35:33.696253    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:35:33.749978    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:35:33.750025    7031 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:35:33.750033    7031 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:35:33.750074    7031 retry.go:31] will retry after 597.509734ms: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:35:34.349565    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:35:34.402316    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:35:34.402368    7031 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:35:34.402377    7031 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:35:34.402402    7031 retry.go:31] will retry after 1.11425808s: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:35:35.518186    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:35:35.571515    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:35:35.571559    7031 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:35:35.571570    7031 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:35:35.571595    7031 retry.go:31] will retry after 853.854459ms: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:35:36.426164    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:35:36.479318    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:35:36.479362    7031 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:35:36.479374    7031 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:35:36.479400    7031 retry.go:31] will retry after 2.31646784s: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:35:38.798165    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:35:38.851089    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:35:38.851134    7031 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:35:38.851144    7031 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:35:38.851170    7031 retry.go:31] will retry after 2.999982507s: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:35:41.851351    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:35:41.903892    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:35:41.903936    7031 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:35:41.903945    7031 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:35:41.903974    7031 retry.go:31] will retry after 3.29923301s: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:35:45.205300    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:35:45.254681    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:35:45.254726    7031 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:35:45.254734    7031 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:35:45.254759    7031 retry.go:31] will retry after 6.576598252s: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:35:51.832953    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:35:51.887238    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:35:51.887280    7031 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:35:51.887290    7031 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:35:51.887319    7031 oci.go:88] couldn't shut down multinode-636000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	 
	I0229 10:35:51.887393    7031 cli_runner.go:164] Run: docker rm -f -v multinode-636000
	I0229 10:35:51.937073    7031 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-636000
	W0229 10:35:51.986282    7031 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-636000 returned with exit code 1
	I0229 10:35:51.986393    7031 cli_runner.go:164] Run: docker network inspect multinode-636000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 10:35:52.036067    7031 cli_runner.go:164] Run: docker network rm multinode-636000
	I0229 10:35:52.146486    7031 fix.go:114] Sleeping 1 second for extra luck!
	I0229 10:35:53.148589    7031 start.go:125] createHost starting for "" (driver="docker")
	I0229 10:35:53.170536    7031 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0229 10:35:53.170730    7031 start.go:159] libmachine.API.Create for "multinode-636000" (driver="docker")
	I0229 10:35:53.170783    7031 client.go:168] LocalClient.Create starting
	I0229 10:35:53.170959    7031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18259-932/.minikube/certs/ca.pem
	I0229 10:35:53.171048    7031 main.go:141] libmachine: Decoding PEM data...
	I0229 10:35:53.171083    7031 main.go:141] libmachine: Parsing certificate...
	I0229 10:35:53.171183    7031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18259-932/.minikube/certs/cert.pem
	I0229 10:35:53.171263    7031 main.go:141] libmachine: Decoding PEM data...
	I0229 10:35:53.171279    7031 main.go:141] libmachine: Parsing certificate...
	I0229 10:35:53.193150    7031 cli_runner.go:164] Run: docker network inspect multinode-636000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0229 10:35:53.244658    7031 cli_runner.go:211] docker network inspect multinode-636000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0229 10:35:53.244745    7031 network_create.go:281] running [docker network inspect multinode-636000] to gather additional debugging logs...
	I0229 10:35:53.244763    7031 cli_runner.go:164] Run: docker network inspect multinode-636000
	W0229 10:35:53.293786    7031 cli_runner.go:211] docker network inspect multinode-636000 returned with exit code 1
	I0229 10:35:53.293816    7031 network_create.go:284] error running [docker network inspect multinode-636000]: docker network inspect multinode-636000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-636000 not found
	I0229 10:35:53.293827    7031 network_create.go:286] output of [docker network inspect multinode-636000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-636000 not found
	
	** /stderr **
	I0229 10:35:53.293936    7031 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 10:35:53.345426    7031 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 10:35:53.345801    7031 network.go:207] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021dff40}
	I0229 10:35:53.345819    7031 network_create.go:124] attempt to create docker network multinode-636000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0229 10:35:53.345893    7031 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-636000 multinode-636000
	I0229 10:35:53.431814    7031 network_create.go:108] docker network multinode-636000 192.168.58.0/24 created
	I0229 10:35:53.431856    7031 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-636000" container
	I0229 10:35:53.431981    7031 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0229 10:35:53.481333    7031 cli_runner.go:164] Run: docker volume create multinode-636000 --label name.minikube.sigs.k8s.io=multinode-636000 --label created_by.minikube.sigs.k8s.io=true
	I0229 10:35:53.530364    7031 oci.go:103] Successfully created a docker volume multinode-636000
	I0229 10:35:53.530482    7031 cli_runner.go:164] Run: docker run --rm --name multinode-636000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-636000 --entrypoint /usr/bin/test -v multinode-636000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0229 10:35:53.831316    7031 oci.go:107] Successfully prepared a docker volume multinode-636000
	I0229 10:35:53.831356    7031 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 10:35:53.831369    7031 kic.go:194] Starting extracting preloaded images to volume ...
	I0229 10:35:53.831475    7031 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-636000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0229 10:41:53.167040    7031 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 10:41:53.167148    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:41:53.217763    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:41:53.217885    7031 retry.go:31] will retry after 130.046864ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:41:53.349884    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:41:53.400589    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:41:53.400712    7031 retry.go:31] will retry after 397.881872ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:41:53.799083    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:41:53.852845    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:41:53.852952    7031 retry.go:31] will retry after 752.347489ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:41:54.605876    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:41:54.659477    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	W0229 10:41:54.659599    7031 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	
	W0229 10:41:54.659616    7031 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:41:54.659683    7031 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 10:41:54.659735    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:41:54.709218    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:41:54.709316    7031 retry.go:31] will retry after 251.694828ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:41:54.961765    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:41:55.011981    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:41:55.012080    7031 retry.go:31] will retry after 387.108565ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:41:55.401384    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:41:55.454308    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:41:55.454409    7031 retry.go:31] will retry after 383.628692ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:41:55.838790    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:41:55.890144    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:41:55.890236    7031 retry.go:31] will retry after 588.844529ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:41:56.481413    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:41:56.532091    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	W0229 10:41:56.532198    7031 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	
	W0229 10:41:56.532222    7031 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:41:56.532239    7031 start.go:128] duration metric: createHost completed in 6m3.388194754s
	I0229 10:41:56.532305    7031 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 10:41:56.532361    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:41:56.581366    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:41:56.581456    7031 retry.go:31] will retry after 271.806125ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:41:56.854623    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:41:56.905476    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:41:56.905568    7031 retry.go:31] will retry after 416.153218ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:41:57.322484    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:41:57.414947    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:41:57.415040    7031 retry.go:31] will retry after 692.308552ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:41:58.108248    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:41:58.159748    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	W0229 10:41:58.159850    7031 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	
	W0229 10:41:58.159866    7031 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:41:58.159921    7031 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 10:41:58.159971    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:41:58.208505    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:41:58.208601    7031 retry.go:31] will retry after 337.440902ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:41:58.548441    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:41:58.600639    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:41:58.600729    7031 retry.go:31] will retry after 396.475298ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:41:58.998395    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:41:59.049866    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:41:59.049959    7031 retry.go:31] will retry after 472.622184ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:41:59.523046    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:41:59.577300    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	W0229 10:41:59.577403    7031 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	
	W0229 10:41:59.577421    7031 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:41:59.577438    7031 fix.go:56] fixHost completed within 6m27.249650082s
	I0229 10:41:59.577444    7031 start.go:83] releasing machines lock for "multinode-636000", held for 6m27.24967862s
	W0229 10:41:59.577459    7031 start.go:694] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W0229 10:41:59.577525    7031 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I0229 10:41:59.577531    7031 start.go:709] Will try again in 5 seconds ...
	I0229 10:42:04.578733    7031 start.go:365] acquiring machines lock for multinode-636000: {Name:mk724cab9aafa05d3a600dc983677a58b42dc1e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 10:42:04.578927    7031 start.go:369] acquired machines lock for "multinode-636000" in 154.687µs
	I0229 10:42:04.578971    7031 start.go:96] Skipping create...Using existing machine configuration
	I0229 10:42:04.578978    7031 fix.go:54] fixHost starting: 
	I0229 10:42:04.579437    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:42:04.632126    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:42:04.632173    7031 fix.go:102] recreateIfNeeded on multinode-636000: state= err=unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:42:04.632188    7031 fix.go:107] machineExists: false. err=machine does not exist
	I0229 10:42:04.654034    7031 out.go:177] * docker "multinode-636000" container is missing, will recreate.
	I0229 10:42:04.696755    7031 delete.go:124] DEMOLISHING multinode-636000 ...
	I0229 10:42:04.696931    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:42:04.747722    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	W0229 10:42:04.747768    7031 stop.go:75] unable to get state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:42:04.747787    7031 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:42:04.748140    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:42:04.797409    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:42:04.797453    7031 delete.go:82] Unable to get host status for multinode-636000, assuming it has already been deleted: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:42:04.797525    7031 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-636000
	W0229 10:42:04.846446    7031 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-636000 returned with exit code 1
	I0229 10:42:04.846477    7031 kic.go:371] could not find the container multinode-636000 to remove it. will try anyways
	I0229 10:42:04.846549    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:42:04.895787    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	W0229 10:42:04.895834    7031 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:42:04.895919    7031 cli_runner.go:164] Run: docker exec --privileged -t multinode-636000 /bin/bash -c "sudo init 0"
	W0229 10:42:04.945370    7031 cli_runner.go:211] docker exec --privileged -t multinode-636000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0229 10:42:04.945398    7031 oci.go:650] error shutdown multinode-636000: docker exec --privileged -t multinode-636000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:42:05.946066    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:42:05.997063    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:42:05.997110    7031 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:42:05.997124    7031 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:42:05.997151    7031 retry.go:31] will retry after 703.908357ms: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:42:06.701890    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:42:06.753248    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:42:06.753298    7031 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:42:06.753309    7031 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:42:06.753341    7031 retry.go:31] will retry after 813.339792ms: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:42:07.567171    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:42:07.617172    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:42:07.617217    7031 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:42:07.617226    7031 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:42:07.617247    7031 retry.go:31] will retry after 1.23539337s: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:42:08.854988    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:42:08.907578    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:42:08.907638    7031 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:42:08.907648    7031 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:42:08.907674    7031 retry.go:31] will retry after 1.593866189s: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:42:10.502972    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:42:10.556301    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:42:10.556348    7031 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:42:10.556358    7031 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:42:10.556384    7031 retry.go:31] will retry after 3.008420144s: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:42:13.566311    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:42:13.615744    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:42:13.615787    7031 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:42:13.615796    7031 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:42:13.615821    7031 retry.go:31] will retry after 4.592774707s: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:42:18.209945    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:42:18.260243    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:42:18.260290    7031 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:42:18.260299    7031 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:42:18.260320    7031 retry.go:31] will retry after 6.748985199s: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:42:25.009529    7031 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:42:25.061722    7031 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:42:25.061770    7031 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:42:25.061781    7031 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:42:25.061808    7031 oci.go:88] couldn't shut down multinode-636000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	 
	I0229 10:42:25.061880    7031 cli_runner.go:164] Run: docker rm -f -v multinode-636000
	I0229 10:42:25.111528    7031 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-636000
	W0229 10:42:25.160198    7031 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-636000 returned with exit code 1
	I0229 10:42:25.160317    7031 cli_runner.go:164] Run: docker network inspect multinode-636000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 10:42:25.210177    7031 cli_runner.go:164] Run: docker network rm multinode-636000
	I0229 10:42:25.326204    7031 fix.go:114] Sleeping 1 second for extra luck!
	I0229 10:42:26.326449    7031 start.go:125] createHost starting for "" (driver="docker")
	I0229 10:42:26.349383    7031 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0229 10:42:26.349554    7031 start.go:159] libmachine.API.Create for "multinode-636000" (driver="docker")
	I0229 10:42:26.349585    7031 client.go:168] LocalClient.Create starting
	I0229 10:42:26.349800    7031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18259-932/.minikube/certs/ca.pem
	I0229 10:42:26.349902    7031 main.go:141] libmachine: Decoding PEM data...
	I0229 10:42:26.349931    7031 main.go:141] libmachine: Parsing certificate...
	I0229 10:42:26.350013    7031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18259-932/.minikube/certs/cert.pem
	I0229 10:42:26.350083    7031 main.go:141] libmachine: Decoding PEM data...
	I0229 10:42:26.350099    7031 main.go:141] libmachine: Parsing certificate...
	I0229 10:42:26.350966    7031 cli_runner.go:164] Run: docker network inspect multinode-636000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0229 10:42:26.403581    7031 cli_runner.go:211] docker network inspect multinode-636000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0229 10:42:26.403667    7031 network_create.go:281] running [docker network inspect multinode-636000] to gather additional debugging logs...
	I0229 10:42:26.403684    7031 cli_runner.go:164] Run: docker network inspect multinode-636000
	W0229 10:42:26.453124    7031 cli_runner.go:211] docker network inspect multinode-636000 returned with exit code 1
	I0229 10:42:26.453147    7031 network_create.go:284] error running [docker network inspect multinode-636000]: docker network inspect multinode-636000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-636000 not found
	I0229 10:42:26.453162    7031 network_create.go:286] output of [docker network inspect multinode-636000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-636000 not found
	
	** /stderr **
	I0229 10:42:26.453297    7031 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 10:42:26.504306    7031 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 10:42:26.505886    7031 network.go:210] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 10:42:26.506246    7031 network.go:207] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000530300}
	I0229 10:42:26.506260    7031 network_create.go:124] attempt to create docker network multinode-636000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0229 10:42:26.506323    7031 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-636000 multinode-636000
	W0229 10:42:26.556165    7031 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-636000 multinode-636000 returned with exit code 1
	W0229 10:42:26.556215    7031 network_create.go:149] failed to create docker network multinode-636000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-636000 multinode-636000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0229 10:42:26.556232    7031 network_create.go:116] failed to create docker network multinode-636000 192.168.67.0/24, will retry: subnet is taken
	I0229 10:42:26.557664    7031 network.go:210] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 10:42:26.558055    7031 network.go:207] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000802de0}
	I0229 10:42:26.558068    7031 network_create.go:124] attempt to create docker network multinode-636000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0229 10:42:26.558131    7031 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-636000 multinode-636000
	I0229 10:42:26.659387    7031 network_create.go:108] docker network multinode-636000 192.168.76.0/24 created
	I0229 10:42:26.659419    7031 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-636000" container
	I0229 10:42:26.659533    7031 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0229 10:42:26.709287    7031 cli_runner.go:164] Run: docker volume create multinode-636000 --label name.minikube.sigs.k8s.io=multinode-636000 --label created_by.minikube.sigs.k8s.io=true
	I0229 10:42:26.758340    7031 oci.go:103] Successfully created a docker volume multinode-636000
	I0229 10:42:26.758468    7031 cli_runner.go:164] Run: docker run --rm --name multinode-636000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-636000 --entrypoint /usr/bin/test -v multinode-636000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0229 10:42:27.062376    7031 oci.go:107] Successfully prepared a docker volume multinode-636000
	I0229 10:42:27.062406    7031 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 10:42:27.062418    7031 kic.go:194] Starting extracting preloaded images to volume ...
	I0229 10:42:27.062520    7031 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-636000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0229 10:48:26.367422    7031 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 10:48:26.367564    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:48:26.418720    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:48:26.418833    7031 retry.go:31] will retry after 190.058322ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:26.609635    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:48:26.682930    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:48:26.683046    7031 retry.go:31] will retry after 299.297406ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:26.982910    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:48:27.037067    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:48:27.037184    7031 retry.go:31] will retry after 436.163189ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:27.474155    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:48:27.525152    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:48:27.525257    7031 retry.go:31] will retry after 764.507112ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:28.291552    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:48:28.342040    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	W0229 10:48:28.342141    7031 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	
	W0229 10:48:28.342163    7031 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:28.342231    7031 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 10:48:28.342291    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:48:28.391463    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:48:28.391566    7031 retry.go:31] will retry after 151.689628ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:28.543557    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:48:28.596997    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:48:28.597092    7031 retry.go:31] will retry after 531.238797ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:29.130779    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:48:29.181585    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:48:29.181695    7031 retry.go:31] will retry after 745.626521ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:29.929676    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:48:29.984195    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	W0229 10:48:29.984301    7031 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	
	W0229 10:48:29.984318    7031 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:29.984338    7031 start.go:128] duration metric: createHost completed in 6m3.640775732s
	I0229 10:48:29.984403    7031 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 10:48:29.984457    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:48:30.033444    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:48:30.033538    7031 retry.go:31] will retry after 200.782424ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:30.235525    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:48:30.286698    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:48:30.286789    7031 retry.go:31] will retry after 200.181981ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:30.487427    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:48:30.538147    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:48:30.538242    7031 retry.go:31] will retry after 753.524591ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:31.292783    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:48:31.343722    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:48:31.343828    7031 retry.go:31] will retry after 635.917891ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:31.982198    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:48:32.032985    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	W0229 10:48:32.033084    7031 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	
	W0229 10:48:32.033099    7031 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:32.033159    7031 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0229 10:48:32.033222    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:48:32.082232    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:48:32.082321    7031 retry.go:31] will retry after 210.99342ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:32.294417    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:48:32.344425    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:48:32.344527    7031 retry.go:31] will retry after 357.518416ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:32.703217    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:48:32.757427    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	I0229 10:48:32.757522    7031 retry.go:31] will retry after 491.536941ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:33.251435    7031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000
	W0229 10:48:33.303396    7031 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000 returned with exit code 1
	W0229 10:48:33.303503    7031 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	
	W0229 10:48:33.303521    7031 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-636000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-636000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:33.303531    7031 fix.go:56] fixHost completed within 6m28.707667991s
	I0229 10:48:33.303537    7031 start.go:83] releasing machines lock for "multinode-636000", held for 6m28.707712768s
	W0229 10:48:33.303616    7031 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-636000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-636000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I0229 10:48:33.347425    7031 out.go:177] 
	W0229 10:48:33.369314    7031 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W0229 10:48:33.369343    7031 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W0229 10:48:33.369360    7031 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I0229 10:48:33.391302    7031 out.go:177] 

                                                
                                                
** /stderr **
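
The stderr above spends roughly six minutes retrying the 22/tcp host-port lookup against a container that was never created. Below is a minimal Go sketch of that lookup-and-retry pattern, assuming the docker CLI is on PATH; the function name sshHostPort and the backoff schedule are illustrative and not minikube's actual code.

// Hypothetical sketch (not minikube source): poll the published host port for
// 22/tcp the way the retries above do, assuming the docker CLI is on PATH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// sshHostPort asks dockerd which host port is mapped to 22/tcp in the named
// container, retrying with a growing delay because the log above shows the
// lookup failing with "No such container" while the machine is still missing.
func sshHostPort(container string, attempts int) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	delay := 200 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		lastErr = err
		time.Sleep(delay)
		delay *= 2 // back off, mirroring the increasing retry intervals in the log
	}
	return "", fmt.Errorf("get port 22 for %q: %w", container, lastErr)
}

func main() {
	port, err := sshHostPort("multinode-636000", 5)
	if err != nil {
		fmt.Println("ssh port lookup failed:", err)
		return
	}
	fmt.Println("ssh host port:", port)
}
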
multinode_test.go:325: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-636000" : exit status 52
multinode_test.go:328: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-636000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-636000
helpers_test.go:235: (dbg) docker inspect multinode-636000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-636000",
	        "Id": "e3c9265bd7780859b87f657507d5d92acea890ea72dc5bc3891264bd0952965c",
	        "Created": "2024-02-29T18:42:26.604786783Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-636000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (116.933252ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 10:48:33.711304    7472 status.go:249] status error: host: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (794.84s)
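
In the failed start above, minikube skips the reserved 192.168.49.0/24 and 192.168.58.0/24 subnets, hits "Pool overlaps with other one on this address space" for 192.168.67.0/24, and finally creates the network on 192.168.76.0/24. The following is a rough Go sketch of that candidate walk, assuming the docker CLI is on PATH; createClusterNetwork and the step size are illustrative, not the real network_create.go logic.

// Hypothetical sketch (not minikube source): walk the 192.168.x.0/24 candidates
// seen in the log (49, 58, 67, 76, ...) and retry "docker network create"
// whenever the daemon reports an overlapping address pool.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func createClusterNetwork(name string) (string, error) {
	for third := 49; third <= 103; third += 9 { // 49, 58, 67, 76, ... as in the log
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
		if err == nil {
			return subnet, nil
		}
		if strings.Contains(string(out), "Pool overlaps") {
			continue // subnet is taken by another network, try the next candidate
		}
		return "", fmt.Errorf("docker network create %s: %v\n%s", name, err, out)
	}
	return "", fmt.Errorf("no free 192.168.x.0/24 subnet found for %s", name)
}

func main() {
	subnet, err := createClusterNetwork("multinode-636000")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("created network on", subnet)
}
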

                                                
                                    
TestMultiNode/serial/DeleteNode (0.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-636000 node delete m03
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-636000 node delete m03: exit status 80 (204.188147ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:424: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-636000 node delete m03": exit status 80
multinode_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-636000 status --alsologtostderr
multinode_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-636000 status --alsologtostderr: exit status 7 (114.681834ms)

                                                
                                                
-- stdout --
	multinode-636000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 10:48:33.978543    7480 out.go:291] Setting OutFile to fd 1 ...
	I0229 10:48:33.978834    7480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 10:48:33.978839    7480 out.go:304] Setting ErrFile to fd 2...
	I0229 10:48:33.978842    7480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 10:48:33.979023    7480 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
	I0229 10:48:33.979750    7480 out.go:298] Setting JSON to false
	I0229 10:48:33.979778    7480 mustload.go:65] Loading cluster: multinode-636000
	I0229 10:48:33.979925    7480 notify.go:220] Checking for updates...
	I0229 10:48:33.980468    7480 config.go:182] Loaded profile config "multinode-636000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 10:48:33.980494    7480 status.go:255] checking status of multinode-636000 ...
	I0229 10:48:33.980880    7480 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:48:34.030579    7480 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:48:34.030653    7480 status.go:330] multinode-636000 host status = "" (err=state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	)
	I0229 10:48:34.030673    7480 status.go:257] multinode-636000 status: &{Name:multinode-636000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0229 10:48:34.030695    7480 status.go:260] status error: host: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	E0229 10:48:34.030705    7480 status.go:263] The "multinode-636000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:430: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-636000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-636000
helpers_test.go:235: (dbg) docker inspect multinode-636000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-636000",
	        "Id": "e3c9265bd7780859b87f657507d5d92acea890ea72dc5bc3891264bd0952965c",
	        "Created": "2024-02-29T18:42:26.604786783Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-636000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (113.385931ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 10:48:34.197080    7486 status.go:249] status error: host: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.49s)
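
The status calls above map "No such container" from the docker CLI to a Nonexistent host rather than treating it as a hard failure. A small Go sketch of that mapping, assuming the docker CLI is on PATH; hostState is an illustrative name, not minikube's status.go.

// Hypothetical sketch (not minikube source): resolve a profile's host state the
// way the status output above does, treating "No such container" as Nonexistent.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostState(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"--format", "{{.State.Status}}", container).CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "No such container") {
			return "Nonexistent", nil // the profile exists but its container was never created
		}
		return "", fmt.Errorf("inspect %s: %v\n%s", container, err, out)
	}
	return strings.TrimSpace(string(out)), nil // e.g. "running", "exited"
}

func main() {
	state, err := hostState("multinode-636000")
	if err != nil {
		fmt.Println("status error:", err)
		return
	}
	fmt.Println("host:", state)
}
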

                                                
                                    
TestMultiNode/serial/StopMultiNode (16.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-636000 stop
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-636000 stop: exit status 82 (16.406156499s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-636000"  ...
	* Stopping node "multinode-636000"  ...
	* Stopping node "multinode-636000"  ...
	* Stopping node "multinode-636000"  ...
	* Stopping node "multinode-636000"  ...
	* Stopping node "multinode-636000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-636000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-636000 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-636000 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-636000 status: exit status 7 (113.998163ms)

                                                
                                                
-- stdout --
	multinode-636000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 10:48:50.717673    7517 status.go:260] status error: host: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	E0229 10:48:50.717685    7517 status.go:263] The "multinode-636000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-636000 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-636000 status --alsologtostderr: exit status 7 (113.631289ms)

                                                
                                                
-- stdout --
	multinode-636000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 10:48:50.780437    7521 out.go:291] Setting OutFile to fd 1 ...
	I0229 10:48:50.780642    7521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 10:48:50.780648    7521 out.go:304] Setting ErrFile to fd 2...
	I0229 10:48:50.780652    7521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 10:48:50.780837    7521 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
	I0229 10:48:50.781034    7521 out.go:298] Setting JSON to false
	I0229 10:48:50.781056    7521 mustload.go:65] Loading cluster: multinode-636000
	I0229 10:48:50.781098    7521 notify.go:220] Checking for updates...
	I0229 10:48:50.781331    7521 config.go:182] Loaded profile config "multinode-636000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 10:48:50.781349    7521 status.go:255] checking status of multinode-636000 ...
	I0229 10:48:50.781733    7521 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:48:50.831225    7521 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:48:50.831322    7521 status.go:330] multinode-636000 host status = "" (err=state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	)
	I0229 10:48:50.831345    7521 status.go:257] multinode-636000 status: &{Name:multinode-636000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0229 10:48:50.831369    7521 status.go:260] status error: host: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	E0229 10:48:50.831377    7521 status.go:263] The "multinode-636000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:361: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-636000 status --alsologtostderr": multinode-636000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:365: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-636000 status --alsologtostderr": multinode-636000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-636000
helpers_test.go:235: (dbg) docker inspect multinode-636000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-636000",
	        "Id": "e3c9265bd7780859b87f657507d5d92acea890ea72dc5bc3891264bd0952965c",
	        "Created": "2024-02-29T18:42:26.604786783Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-636000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (113.147401ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 10:48:50.997903    7527 status.go:249] status error: host: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (16.80s)
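
The stop above prints "Stopping node" six times before giving up with GUEST_STOP_TIMEOUT, since the container it is trying to stop no longer exists. Below is a minimal Go sketch of a bounded stop-retry loop in that spirit; the attempt count, the 2-second pause, and the stopNode name are assumptions for illustration, not minikube's actual stop path.

// Hypothetical sketch (not minikube source): retry stopping the node container a
// fixed number of times, then surface a timeout-style error as the log does.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

var errStopTimeout = errors.New("unable to stop node within the retry budget")

func stopNode(container string, attempts int) error {
	for i := 0; i < attempts; i++ {
		fmt.Printf("* Stopping node %q ...\n", container)
		if err := exec.Command("docker", "stop", container).Run(); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second) // wait before the next attempt
	}
	return errStopTimeout // reported to the user as GUEST_STOP_TIMEOUT in the log above
}

func main() {
	if err := stopNode("multinode-636000", 6); err != nil {
		fmt.Println(err)
	}
}
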

                                                
                                    
TestMultiNode/serial/RestartMultiNode (131.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-636000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0229 10:49:19.062042    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 10:49:50.682824    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
multinode_test.go:382: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-636000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (2m11.258735424s)

                                                
                                                
-- stdout --
	* [multinode-636000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18259
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-636000 in cluster multinode-636000
	* Pulling base image v0.0.42-1708944392-18244 ...
	* docker "multinode-636000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 10:48:51.115543    7533 out.go:291] Setting OutFile to fd 1 ...
	I0229 10:48:51.115802    7533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 10:48:51.115807    7533 out.go:304] Setting ErrFile to fd 2...
	I0229 10:48:51.115811    7533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 10:48:51.115994    7533 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
	I0229 10:48:51.117391    7533 out.go:298] Setting JSON to false
	I0229 10:48:51.140531    7533 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4701,"bootTime":1709227830,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0229 10:48:51.140632    7533 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 10:48:51.162870    7533 out.go:177] * [multinode-636000] minikube v1.32.0 on Darwin 14.3.1
	I0229 10:48:51.204509    7533 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 10:48:51.204576    7533 notify.go:220] Checking for updates...
	I0229 10:48:51.248536    7533 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig
	I0229 10:48:51.269653    7533 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0229 10:48:51.311505    7533 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 10:48:51.332595    7533 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	I0229 10:48:51.353592    7533 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 10:48:51.375217    7533 config.go:182] Loaded profile config "multinode-636000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 10:48:51.375929    7533 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 10:48:51.431327    7533 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 10:48:51.431484    7533 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 10:48:51.531925    7533 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:false NGoroutines:155 SystemTime:2024-02-29 18:48:51.521120302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 10:48:51.574382    7533 out.go:177] * Using the docker driver based on existing profile
	I0229 10:48:51.595558    7533 start.go:299] selected driver: docker
	I0229 10:48:51.595577    7533 start.go:903] validating driver "docker" against &{Name:multinode-636000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-636000 Namespace:default APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 10:48:51.595665    7533 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 10:48:51.595841    7533 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 10:48:51.696463    7533 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:84 OomKillDisable:false NGoroutines:155 SystemTime:2024-02-29 18:48:51.686740148 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 10:48:51.699720    7533 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 10:48:51.699785    7533 cni.go:84] Creating CNI manager for ""
	I0229 10:48:51.699794    7533 cni.go:136] 1 nodes found, recommending kindnet
	I0229 10:48:51.699808    7533 start_flags.go:323] config:
	{Name:multinode-636000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-636000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: S
taticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 10:48:51.743432    7533 out.go:177] * Starting control plane node multinode-636000 in cluster multinode-636000
	I0229 10:48:51.764514    7533 cache.go:121] Beginning downloading kic base image for docker with docker
	I0229 10:48:51.786377    7533 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0229 10:48:51.828447    7533 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 10:48:51.828528    7533 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 10:48:51.828549    7533 cache.go:56] Caching tarball of preloaded images
	I0229 10:48:51.828553    7533 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 10:48:51.828778    7533 preload.go:174] Found /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 10:48:51.828812    7533 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 10:48:51.829613    7533 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/multinode-636000/config.json ...
	I0229 10:48:51.879253    7533 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0229 10:48:51.879268    7533 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0229 10:48:51.879294    7533 cache.go:194] Successfully downloaded all kic artifacts
	I0229 10:48:51.879334    7533 start.go:365] acquiring machines lock for multinode-636000: {Name:mk724cab9aafa05d3a600dc983677a58b42dc1e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 10:48:51.879416    7533 start.go:369] acquired machines lock for "multinode-636000" in 62.861µs
	I0229 10:48:51.879444    7533 start.go:96] Skipping create...Using existing machine configuration
	I0229 10:48:51.879454    7533 fix.go:54] fixHost starting: 
	I0229 10:48:51.879674    7533 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:48:51.928875    7533 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:48:51.928947    7533 fix.go:102] recreateIfNeeded on multinode-636000: state= err=unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:51.928968    7533 fix.go:107] machineExists: false. err=machine does not exist
	I0229 10:48:51.950706    7533 out.go:177] * docker "multinode-636000" container is missing, will recreate.
	I0229 10:48:51.972457    7533 delete.go:124] DEMOLISHING multinode-636000 ...
	I0229 10:48:51.972576    7533 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:48:52.022234    7533 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	W0229 10:48:52.022278    7533 stop.go:75] unable to get state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:52.022298    7533 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:52.022644    7533 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:48:52.071789    7533 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:48:52.071838    7533 delete.go:82] Unable to get host status for multinode-636000, assuming it has already been deleted: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:52.071923    7533 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-636000
	W0229 10:48:52.120610    7533 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-636000 returned with exit code 1
	I0229 10:48:52.120645    7533 kic.go:371] could not find the container multinode-636000 to remove it. will try anyways
	I0229 10:48:52.120712    7533 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:48:52.169726    7533 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	W0229 10:48:52.169780    7533 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:52.169864    7533 cli_runner.go:164] Run: docker exec --privileged -t multinode-636000 /bin/bash -c "sudo init 0"
	W0229 10:48:52.218953    7533 cli_runner.go:211] docker exec --privileged -t multinode-636000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0229 10:48:52.218981    7533 oci.go:650] error shutdown multinode-636000: docker exec --privileged -t multinode-636000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:53.219834    7533 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:48:53.272521    7533 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:48:53.272569    7533 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:53.272590    7533 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:48:53.272626    7533 retry.go:31] will retry after 314.873131ms: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:53.587839    7533 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:48:53.640285    7533 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:48:53.640329    7533 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:53.640339    7533 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:48:53.640365    7533 retry.go:31] will retry after 738.780973ms: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:54.380253    7533 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:48:54.430830    7533 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:48:54.430872    7533 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:54.430883    7533 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:48:54.430914    7533 retry.go:31] will retry after 1.579687573s: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:56.010825    7533 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:48:56.061374    7533 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:48:56.061422    7533 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:56.061433    7533 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:48:56.061458    7533 retry.go:31] will retry after 1.158279998s: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:57.220657    7533 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:48:57.272966    7533 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:48:57.273009    7533 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:48:57.273017    7533 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:48:57.273042    7533 retry.go:31] will retry after 2.809473981s: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:49:00.084867    7533 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:49:00.136996    7533 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:49:00.137044    7533 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:49:00.137056    7533 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:49:00.137086    7533 retry.go:31] will retry after 4.59485115s: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:49:04.732339    7533 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:49:04.785957    7533 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:49:04.786010    7533 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:49:04.786021    7533 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:49:04.786039    7533 retry.go:31] will retry after 6.588404854s: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:49:11.375064    7533 cli_runner.go:164] Run: docker container inspect multinode-636000 --format={{.State.Status}}
	W0229 10:49:11.425955    7533 cli_runner.go:211] docker container inspect multinode-636000 --format={{.State.Status}} returned with exit code 1
	I0229 10:49:11.425998    7533 oci.go:662] temporary error verifying shutdown: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	I0229 10:49:11.426027    7533 oci.go:664] temporary error: container multinode-636000 status is  but expect it to be exited
	I0229 10:49:11.426057    7533 oci.go:88] couldn't shut down multinode-636000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000
	 
	I0229 10:49:11.426125    7533 cli_runner.go:164] Run: docker rm -f -v multinode-636000
	I0229 10:49:11.475778    7533 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-636000
	W0229 10:49:11.525144    7533 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-636000 returned with exit code 1
	I0229 10:49:11.525244    7533 cli_runner.go:164] Run: docker network inspect multinode-636000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 10:49:11.574336    7533 cli_runner.go:164] Run: docker network rm multinode-636000
	I0229 10:49:11.695033    7533 fix.go:114] Sleeping 1 second for extra luck!
	I0229 10:49:12.696110    7533 start.go:125] createHost starting for "" (driver="docker")
	I0229 10:49:12.720223    7533 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0229 10:49:12.720385    7533 start.go:159] libmachine.API.Create for "multinode-636000" (driver="docker")
	I0229 10:49:12.720448    7533 client.go:168] LocalClient.Create starting
	I0229 10:49:12.720634    7533 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18259-932/.minikube/certs/ca.pem
	I0229 10:49:12.720732    7533 main.go:141] libmachine: Decoding PEM data...
	I0229 10:49:12.720762    7533 main.go:141] libmachine: Parsing certificate...
	I0229 10:49:12.720867    7533 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18259-932/.minikube/certs/cert.pem
	I0229 10:49:12.720940    7533 main.go:141] libmachine: Decoding PEM data...
	I0229 10:49:12.720957    7533 main.go:141] libmachine: Parsing certificate...
	I0229 10:49:12.741693    7533 cli_runner.go:164] Run: docker network inspect multinode-636000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0229 10:49:12.794981    7533 cli_runner.go:211] docker network inspect multinode-636000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0229 10:49:12.795066    7533 network_create.go:281] running [docker network inspect multinode-636000] to gather additional debugging logs...
	I0229 10:49:12.795084    7533 cli_runner.go:164] Run: docker network inspect multinode-636000
	W0229 10:49:12.844676    7533 cli_runner.go:211] docker network inspect multinode-636000 returned with exit code 1
	I0229 10:49:12.844703    7533 network_create.go:284] error running [docker network inspect multinode-636000]: docker network inspect multinode-636000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-636000 not found
	I0229 10:49:12.844715    7533 network_create.go:286] output of [docker network inspect multinode-636000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-636000 not found
	
	** /stderr **
	I0229 10:49:12.844845    7533 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0229 10:49:12.895819    7533 network.go:210] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 10:49:12.896204    7533 network.go:207] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00217f9a0}
	I0229 10:49:12.896226    7533 network_create.go:124] attempt to create docker network multinode-636000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0229 10:49:12.896294    7533 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-636000 multinode-636000
	I0229 10:49:12.982403    7533 network_create.go:108] docker network multinode-636000 192.168.58.0/24 created
	I0229 10:49:12.982437    7533 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-636000" container
	I0229 10:49:12.982551    7533 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0229 10:49:13.033279    7533 cli_runner.go:164] Run: docker volume create multinode-636000 --label name.minikube.sigs.k8s.io=multinode-636000 --label created_by.minikube.sigs.k8s.io=true
	I0229 10:49:13.082990    7533 oci.go:103] Successfully created a docker volume multinode-636000
	I0229 10:49:13.083107    7533 cli_runner.go:164] Run: docker run --rm --name multinode-636000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-636000 --entrypoint /usr/bin/test -v multinode-636000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0229 10:49:13.383777    7533 oci.go:107] Successfully prepared a docker volume multinode-636000
	I0229 10:49:13.383818    7533 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 10:49:13.383830    7533 kic.go:194] Starting extracting preloaded images to volume ...
	I0229 10:49:13.383921    7533 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-636000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir

                                                
                                                
** /stderr **
multinode_test.go:384: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-636000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-636000
helpers_test.go:235: (dbg) docker inspect multinode-636000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-636000",
	        "Id": "de2efc1ce6d5203e5951535ab9af06bfe8c5a51e38a38277ffd1be84d78ed90e",
	        "Created": "2024-02-29T18:49:12.943190955Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-636000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-636000 -n multinode-636000: exit status 7 (113.788922ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 10:51:02.542835    7710 status.go:249] status error: host: state: unknown state "multinode-636000": docker container inspect multinode-636000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-636000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-636000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (131.55s)
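The probe this block repeats is a plain CLI call. As a reading aid only (not part of the report), here is a minimal Go sketch of the same container-state check; the profile name multinode-636000 comes from the log above, everything else is an illustrative assumption.

// Minimal sketch of the state probe the harness runs above:
// "docker container inspect <name> --format {{.State.Status}}".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "multinode-636000" // profile/container name taken from the failure above
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		// "Error response from daemon: No such container: ..." plus exit status 1
		// is exactly what the log shows and what is later reported as "Nonexistent".
		fmt.Printf("inspect failed: %v\n%s", err, out)
		return
	}
	fmt.Println("container state:", strings.TrimSpace(string(out)))
}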

                                                
                                    
TestScheduledStopUnix (300.9s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-575000 --memory=2048 --driver=docker 
E0229 10:54:19.055847    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 10:54:50.678467    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 10:55:42.106902    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-575000 --memory=2048 --driver=docker : signal: killed (5m0.003335438s)

                                                
                                                
-- stdout --
	* [scheduled-stop-575000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18259
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node scheduled-stop-575000 in cluster scheduled-stop-575000
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [scheduled-stop-575000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18259
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node scheduled-stop-575000 in cluster scheduled-stop-575000
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-02-29 10:58:48.626756 -0800 PST m=+4912.834188347
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-575000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-575000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "scheduled-stop-575000",
	        "Id": "27625475352d15d8282402a6921c49caec931d9e4569f9f714191b40c11563a6",
	        "Created": "2024-02-29T18:53:49.851006077Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-575000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-575000 -n scheduled-stop-575000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-575000 -n scheduled-stop-575000: exit status 7 (113.673102ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 10:58:48.791813    8450 status.go:249] status error: host: state: unknown state "scheduled-stop-575000": docker container inspect scheduled-stop-575000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-575000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-575000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-575000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-575000
--- FAIL: TestScheduledStopUnix (300.90s)
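The docker inspect output above shows the orphaned bridge network the interrupted start left behind. As an illustration only, a short Go sketch (struct fields mirror the JSON above; the network name is the one from this test) that decodes that same output and prints the subnet, gateway and labels:

// Minimal sketch, assuming Docker is running and the network still exists.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type network struct {
	Name string
	IPAM struct {
		Config []struct {
			Subnet  string
			Gateway string
		}
	}
	Labels map[string]string
}

func main() {
	out, err := exec.Command("docker", "inspect", "scheduled-stop-575000").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	var nets []network
	if err := json.Unmarshal(out, &nets); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, n := range nets {
		for _, c := range n.IPAM.Config {
			fmt.Printf("%s: subnet=%s gateway=%s labels=%v\n",
				n.Name, c.Subnet, c.Gateway, n.Labels)
		}
	}
}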

                                                
                                    
TestSkaffold (300.91s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe3024028703 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-370000 --memory=2600 --driver=docker 
E0229 10:59:19.051662    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 10:59:50.672131    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 11:01:13.724784    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-370000 --memory=2600 --driver=docker : signal: killed (4m53.928302298s)

                                                
                                                
-- stdout --
	* [skaffold-370000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18259
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node skaffold-370000 in cluster skaffold-370000
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [skaffold-370000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18259
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node skaffold-370000 in cluster skaffold-370000
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
panic.go:626: *** TestSkaffold FAILED at 2024-02-29 11:03:49.497634 -0800 PST m=+5213.740957695
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-370000
helpers_test.go:235: (dbg) docker inspect skaffold-370000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "skaffold-370000",
	        "Id": "43d318943590ea4c898261e1610c6051edb3841d1d560c38f8d87d7aa69c46ec",
	        "Created": "2024-02-29T18:58:56.739960546Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-370000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-370000 -n skaffold-370000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-370000 -n skaffold-370000: exit status 7 (114.411711ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 11:03:49.666613    8728 status.go:249] status error: host: state: unknown state "skaffold-370000": docker container inspect skaffold-370000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-370000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-370000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-370000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-370000
--- FAIL: TestSkaffold (300.91s)
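The harness cleans up by running out/minikube-darwin-amd64 delete -p for the profile, as shown above. As a hedged sketch only (the label value comes from the inspect output above; using docker network ls with a label filter is an assumption here, not a command the report runs), leftover minikube-labelled networks can be listed like this:

// Lists Docker networks carrying the created_by.minikube.sigs.k8s.io=true label,
// so they can be removed once the corresponding profile has been deleted.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "network", "ls",
		"--filter", "label=created_by.minikube.sigs.k8s.io=true",
		"--format", "{{.Name}}").Output()
	if err != nil {
		fmt.Println("listing networks failed:", err)
		return
	}
	for _, name := range strings.Fields(string(out)) {
		fmt.Println("leftover minikube network:", name)
		// To remove it after `minikube delete -p <profile>`:
		//   exec.Command("docker", "network", "rm", name).Run()
	}
}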

                                                
                                    
TestInsufficientStorage (300.74s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-415000 --memory=2048 --output=json --wait=true --driver=docker 
E0229 11:04:19.013799    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 11:04:50.635017    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-415000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.004171539s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"90558752-ddd6-48f7-a305-995566f775f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-415000] minikube v1.32.0 on Darwin 14.3.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"29d99249-cf9c-4dd1-b845-ef45a7b30f55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18259"}}
	{"specversion":"1.0","id":"64634bf5-bde5-4067-ae2b-a50c49d3adb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig"}}
	{"specversion":"1.0","id":"01ff0f56-3445-4fbf-9603-d4130e2b2749","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"64fc9c09-b20c-4351-a963-95b3830d8879","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"039e1bdf-12a8-4746-9166-cbf8f015e4e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube"}}
	{"specversion":"1.0","id":"853a5c31-b356-4ae1-bc09-2ea4136cc7b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"195f87cd-0ccd-42fd-aa18-8248550eb239","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c687661b-abfb-43a6-8488-68e76df8d31d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"5c83d0ea-c03a-4474-8d38-2be1595acbce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"04255b66-2658-4052-bdf4-717efc8cfce4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"eb1e357f-dd61-4f4b-b16c-6f316cd117b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-415000 in cluster insufficient-storage-415000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fffaeec9-f252-4b41-9695-f32c694e292c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1708944392-18244 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"cbb39e68-13ac-409b-9c0a-25e19d60304b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-415000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-415000 --output=json --layout=cluster: context deadline exceeded (656ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-415000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-415000
--- FAIL: TestInsufficientStorage (300.74s)
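The --output=json run above emits one CloudEvents-style JSON object per line, which is what status_test.go then fails to unmarshal when the run is killed. A minimal Go sketch, assuming the stdout above has been saved to a hypothetical events.json file (one object per line), that decodes those records and prints each step message:

// Decodes the minikube JSON event lines shown above and prints type + message.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type minikubeEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	f, err := os.Open("events.json") // hypothetical capture of the stdout above
	if err != nil {
		fmt.Println("open failed:", err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip lines that are not JSON (blanks, progress output)
		}
		fmt.Printf("%-40s %s\n", ev.Type, ev.Data["message"])
	}
}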

                                                
                                    
TestRunningBinaryUpgrade (7200.692s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.1224523341 start -p running-upgrade-175000 --memory=2200 --vm-driver=docker 
E0229 11:09:19.004050    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 11:09:50.625698    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 11:12:22.051173    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 11:14:18.993882    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 11:14:50.614434    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 11:17:53.665292    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 11:19:18.984514    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 11:19:50.604497    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.1224523341 start -p running-upgrade-175000 --memory=2200 --vm-driver=docker : exit status 52 (14m24.57746329s)

                                                
                                                
-- stdout --
	* [running-upgrade-175000] minikube v1.26.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1125250523
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node running-upgrade-175000 in cluster running-upgrade-175000
	* Pulling base image ...
	* minikube 1.32.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.32.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	* Downloading Kubernetes v1.24.1 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "running-upgrade-175000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > gcr.io/k8s-minikube/kicbase: 386.00 MiB / 386.00 MiB  100.00% (repeated download-progress output condensed; first pull completed)
	    > gcr.io/k8s-minikube/kicbase: 0 B [___________________________] ?% ? p/s 42s (repeated download-progress output condensed; second pull made no progress)
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p running-upgrade-175000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Related issue: https://github.com/kubernetes/minikube/issues/7072

                                                
                                                
** /stderr **
version_upgrade_test.go:120: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.1224523341 start -p running-upgrade-175000 --memory=2200 --vm-driver=docker 
E0229 11:24:18.974009    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 11:24:50.595242    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
E0229 11:29:02.018184    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 11:29:18.964083    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 11:29:50.584494    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.1224523341 start -p running-upgrade-175000 --memory=2200 --vm-driver=docker : exit status 52 (12m53.247166413s)

                                                
                                                
-- stdout --
	* [running-upgrade-175000] minikube v1.26.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1997544283
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-175000 in cluster running-upgrade-175000
	* Pulling base image ...
	* docker "running-upgrade-175000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "running-upgrade-175000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p running-upgrade-175000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Related issue: https://github.com/kubernetes/minikube/issues/7072

                                                
                                                
** /stderr **
version_upgrade_test.go:120: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.26.0.1224523341 start -p running-upgrade-175000 --memory=2200 --vm-driver=docker 
panic: test timed out after 2h0m0s
running tests:
	TestMissingContainerUpgrade (15m30s)
	TestRunningBinaryUpgrade (28m5s)

                                                
                                                
goroutine 1918 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 15 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000235d40, 0xc001397bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000900300, {0x1241dce0, 0x2a, 0x2a}, {0xe0f4ba5?, 0xfb76acb?, 0x124403e0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc00098c640)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc00098c640)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

                                                
                                                
goroutine 9 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0006a5c80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 215 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 214
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 210 [chan receive, 114 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a6cf40, 0xc00069e060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 203
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

                                                
                                                
goroutine 569 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc00080eff0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008a8000)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008a8000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc0008a8000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:36 +0x92
testing.tRunner(0xc0008a8000, 0x1110fa58)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 13 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 12
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

                                                
                                                
goroutine 209 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000a2a900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 203
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 571 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc00080eff0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008a84e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008a84e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestDockerFlags(0xc0008a84e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:43 +0x105
testing.tRunner(0xc0008a84e0, 0x1110fa68)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 213 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000a6cf10, 0x2c)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x10c2c0e0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000a2a7e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a6cf40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00021df70, {0x1111bb40, 0xc000608a20}, 0x1, 0xc00069e060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00021df70, 0x3b9aca00, 0x0, 0x1, 0xc00069e060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 210
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 214 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1113e540, 0xc00069e060}, 0xc0023f1f50, 0xc000918f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1113e540, 0xc00069e060}, 0x0?, 0xc0023f1f50, 0xc0023f1f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1113e540?, 0xc00069e060?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 210
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 570 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc00080eff0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008a8340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008a8340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertExpiration(0xc0008a8340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc0008a8340, 0x1110fa50)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1864 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc00080eff0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000889ba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000889ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc000889ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:143 +0x86
testing.tRunner(0xc000889ba0, 0x1110fb88)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 573 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc00080eff0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008a89c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008a89c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc0008a89c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:146 +0x92
testing.tRunner(0xc0008a89c0, 0x1110fa90)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1770 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc00080eff0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000888000)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000888000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc000888000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:47 +0x39
testing.tRunner(0xc000888000, 0x1110fb38)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1843 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc00080eff0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000889520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000889520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc000889520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc000889520, 0x1110fb80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 892 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 891
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1771 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc00080eff0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008881a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008881a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc0008881a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc0008881a0, 0x1110fb40)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1866 [syscall, 3 minutes]:
syscall.syscall6(0xc0022a9f80?, 0x1000000000010?, 0x1000000004c?, 0x59bf6708?, 0x90?, 0x12d22108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0020c9758?, 0xe035165?, 0x90?, 0x1107fa80?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xe165e85?, 0xc0020c978c, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc002548090)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc002356160)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc002356160)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002850000, 0xc002356160)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestMissingContainerUpgrade.func1()
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:309 +0x66
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer.Operation.withEmptyData.func1()
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:18 +0x13
github.com/cenkalti/backoff/v4.doRetryNotify[...](0xc0020c9ba0?, {0x11127c30, 0xc0009645a0}, 0x11110a88, {0x0, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:88 +0x132
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer(0x10aae198?, {0x11127c30?, 0xc0009645a0?}, 0x40?, {0x0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:61 +0x5c
github.com/cenkalti/backoff/v4.RetryNotify(...)
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:49
k8s.io/minikube/pkg/util/retry.Expo(0xc0020c9d10, 0x3b9aca00, 0x1a3185c5000, {0xc0020c9c70?, 0x10c2c0e0?, 0xb7c?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/util/retry/retry.go:60 +0xeb
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc002850000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:314 +0x54e
testing.tRunner(0xc002850000, 0x1110fb20)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1772 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc00080eff0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000888d00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000888d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc000888d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc000888d00, 0x1110fb50)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1916 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x597d50b0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0006fecc0?, 0xc0007cd200?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0006fecc0, {0xc0007cd200, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0026d4048, {0xc0007cd200?, 0xc0023f1df0?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0022a8060, {0x1111a558, 0xc002228098})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x1111a698, 0xc0022a8060}, {0x1111a558, 0xc002228098}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0023f1e78?, {0x1111a698, 0xc0022a8060})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x123dfdc0?, {0x1111a698?, 0xc0022a8060?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x1111a698, 0xc0022a8060}, {0x1111a618, 0xc0026d4048}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00069ec01?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1866
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 1904 [IO wait]:
internal/poll.runtime_pollWait(0x597d4dc8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0026ca2a0?, 0xc0022f4296?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0026ca2a0, {0xc0022f4296, 0x56a, 0x56a})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0022280a8, {0xc0022f4296?, 0xc000705500?, 0x217?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0022a80f0, {0x1111a558, 0xc0026d4058})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x1111a698, 0xc0022a80f0}, {0x1111a558, 0xc0026d4058}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0023ee678?, {0x1111a698, 0xc0022a80f0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x123dfdc0?, {0x1111a698?, 0xc0022a80f0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x1111a698, 0xc0022a80f0}, {0x1111a618, 0xc0022280a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00069f920?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1863
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 572 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc00080eff0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008a8680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008a8680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc0008a8680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:83 +0x92
testing.tRunner(0xc0008a8680, 0x1110fa98)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 576 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc00080eff0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008a8ea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008a8ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestHyperKitDriverInstallOrUpdate(0xc0008a8ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/driver_install_or_update_test.go:108 +0x39
testing.tRunner(0xc0008a8ea0, 0x1110fab0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 593 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc00080eff0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0008a9040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0008a9040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestHyperkitDriverSkipUpgrade(0xc0008a9040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/driver_install_or_update_test.go:172 +0x2a
testing.tRunner(0xc0008a9040, 0x1110fab8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 873 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00279a5a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 799
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 1863 [syscall]:
syscall.syscall6(0xc0022a9f80?, 0x1000000000010?, 0x1000000004c?, 0x59bf6708?, 0x90?, 0x12d225b8?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0023e1758?, 0xe035165?, 0x90?, 0x1107fa80?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xe165e85?, 0xc0023e178c, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc0025480c0)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0023562c0)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0023562c0)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0008891e0, 0xc0023562c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade.func1()
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:120 +0x36d
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer.Operation.withEmptyData.func1()
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:18 +0x13
github.com/cenkalti/backoff/v4.doRetryNotify[...](0xc0023e1c38?, {0x11127c30, 0xc000964860}, 0x11110a88, {0x0, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:88 +0x132
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer(0xfb13a1f?, {0x11127c30?, 0xc000964860?}, 0x40?, {0x0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:61 +0x5c
github.com/cenkalti/backoff/v4.RetryNotify(...)
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.2.1/retry.go:49
k8s.io/minikube/pkg/util/retry.Expo(0xc0023e1e08, 0x3b9aca00, 0x1a3185c5000, {0xc0023e1d10?, 0x10c2c0e0?, 0x6ba8d?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/util/retry/retry.go:60 +0xeb
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc0008891e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:125 +0x4e5
testing.tRunner(0xc0008891e0, 0x1110fb60)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390
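Goroutine 1863 is the one actually doing work: TestRunningBinaryUpgrade is waiting on a child minikube process inside retry.Expo, which retries the start command with exponential backoff (via cenkalti/backoff) up to a deadline. A standard-library-only sketch of that retry shape, with illustrative constants and command rather than minikube's implementation:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryExpo retries fn with exponentially growing sleeps, capped by maxTotal.
// This is roughly the shape of k8s.io/minikube/pkg/util/retry.Expo visible in
// the stack above, not its actual source.
func retryExpo(fn func() error, initial, maxTotal time.Duration) error {
	deadline := time.Now().Add(maxTotal)
	delay := initial
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().Add(delay).After(deadline) {
			return fmt.Errorf("giving up after %s: %w", maxTotal, err)
		}
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	// Illustrative command; the real test shells out to a released minikube binary.
	err := retryExpo(func() error {
		return exec.Command("true").Run()
	}, time.Second, 30*time.Second)
	fmt.Println("result:", err)
}
```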

                                                
                                                
goroutine 891 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1113e540, 0xc00069e060}, 0xc000112f50, 0xc002283f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1113e540, 0xc00069e060}, 0xe0?, 0xc000112f50, 0xc000112f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1113e540?, 0xc00069e060?}, 0xc000112fb0?, 0xe5b9298?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xe1aede5?, 0xc00227c000?, 0xc0027e3ce0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 874
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 1865 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc00080eff0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000889d40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000889d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc000889d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:215 +0x39
testing.tRunner(0xc000889d40, 0x1110fb08)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1915 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x597d4cd0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0006fec00?, 0xc0021f5283?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0006fec00, {0xc0021f5283, 0x57d, 0x57d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0026d4030, {0xc0021f5283?, 0xc00020f340?, 0x204?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0022a8030, {0x1111a558, 0xc002228088})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x1111a698, 0xc0022a8030}, {0x1111a558, 0xc002228088}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0023ede78?, {0x1111a698, 0xc0022a8030})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x123dfdc0?, {0x1111a698?, 0xc0022a8030?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x1111a698, 0xc0022a8030}, {0x1111a618, 0xc0026d4030}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0027e3320?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1866
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 655 [IO wait, 110 minutes]:
internal/poll.runtime_pollWait(0x597d4fb8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002958380?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc002958380)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc002958380)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc002960a00)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc002960a00)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0006760f0, {0x11131de0, 0xc002960a00})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0006760f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc0022309c0?, 0xc002230ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 652
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129
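Goroutine 655 has been sitting in Accept for 110 minutes: startHTTPProxy brings up a local HTTP server for the functional proxy tests and leaves it serving for the remainder of the run. A minimal sketch of that pattern (the handler and address are placeholders, not the test's actual proxy logic):

```go
package main

import (
	"fmt"
	"net"
	"net/http"
)

// startHTTPProxy starts a throwaway HTTP server on a free localhost port and
// returns its address; the serving goroutine stays parked in Accept, which is
// the state shown for goroutine 655 above.
func startHTTPProxy() (string, error) {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return "", err
	}
	srv := &http.Server{
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintf(w, "proxy saw request for %s\n", r.URL)
		}),
	}
	go srv.Serve(ln) // blocks in Accept until the process exits
	return ln.Addr().String(), nil
}

func main() {
	addr, err := startHTTPProxy()
	if err != nil {
		panic(err)
	}
	fmt.Println("proxy listening on", addr)
}
```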

                                                
                                                
goroutine 1921 [IO wait]:
internal/poll.runtime_pollWait(0x597d52a0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0026ca360?, 0xc0007cd400?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0026ca360, {0xc0007cd400, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0022280e8, {0xc0007cd400?, 0xc0023f1df0?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0022a8120, {0x1111a558, 0xc0026d4060})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x1111a698, 0xc0022a8120}, {0x1111a558, 0xc0026d4060}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0023f1e78?, {0x1111a698, 0xc0022a8120})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x123dfdc0?, {0x1111a698?, 0xc0022a8120?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x1111a698, 0xc0022a8120}, {0x1111a618, 0xc0022280e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00069ec01?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1863
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 874 [chan receive, 108 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000937080, 0xc00069e060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 799
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

                                                
                                                
goroutine 1279 [select, 106 minutes]:
net/http.(*persistConn).readLoop(0xc002381200)
	/usr/local/go/src/net/http/transport.go:2260 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1260
	/usr/local/go/src/net/http/transport.go:1798 +0x152f

                                                
                                                
goroutine 1278 [chan send, 106 minutes]:
os/exec.(*Cmd).watchCtx(0xc0022e2c60, 0xc002571200)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 786
	/usr/local/go/src/os/exec/exec.go:750 +0x973
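This goroutine, and several like it below, has been blocked in a channel send for over 100 minutes: os/exec's watchCtx goroutine sends the command's outcome on a channel that is drained when the command is waited on, so a command that is never waited on (or whose caller has already moved on) can leave that goroutine parked indefinitely. A self-contained illustration of the same blocking shape (this is not the os/exec source):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	results := make(chan error) // unbuffered and never drained below
	go func() {
		results <- fmt.Errorf("command finished") // parks here: "chan send"
	}()
	time.Sleep(100 * time.Millisecond)
	// The sender is still blocked; in a long test run such goroutines appear in
	// the dump with ever-growing "chan send, N minutes" annotations.
	fmt.Println("main continues while the sender goroutine stays blocked")
}
```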

                                                
                                                
goroutine 1035 [chan send, 108 minutes]:
os/exec.(*Cmd).watchCtx(0xc00256fb80, 0xc002570a20)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1034
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 890 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000937050, 0x2b)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x10c2c0e0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00279a480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000937080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008e2ff0, {0x1111bb40, 0xc000609dd0}, 0x1, 0xc00069e060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008e2ff0, 0x3b9aca00, 0x0, 0x1, 0xc00069e060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 874
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef
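Goroutines 874, 890, and 891 are client-go's certificate-rotation machinery: Run blocks on a stop channel while a worker pulls items off a workqueue under the wait helpers. A stripped-down sketch of that worker-loop shape using only the standard library (the real code uses k8s.io/client-go/util/workqueue and apimachinery's wait package):

```go
package main

import (
	"fmt"
	"time"
)

// runWorker drains the queue until stop is closed, mirroring the
// processNextWorkItem loop visible in goroutine 890's stack.
func runWorker(queue <-chan string, stop <-chan struct{}) {
	for {
		select {
		case item := <-queue:
			fmt.Println("processing", item) // e.g. react to a certificate-rotation event
		case <-stop:
			return
		}
	}
}

func main() {
	queue := make(chan string, 8)
	stop := make(chan struct{})
	go runWorker(queue, stop) // parked waiting for work, like goroutine 890

	queue <- "cert-rotation-event"
	time.Sleep(50 * time.Millisecond)
	close(stop) // Run-style goroutines block on a channel like this until shutdown
}
```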

                                                
                                                
goroutine 1171 [chan send, 106 minutes]:
os/exec.(*Cmd).watchCtx(0xc0021deb00, 0xc0027e37a0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1170
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 1216 [chan send, 106 minutes]:
os/exec.(*Cmd).watchCtx(0xc0023966e0, 0xc002352c60)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1215
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 1917 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc002356160, 0xc00069e120)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1866
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 1280 [select, 106 minutes]:
net/http.(*persistConn).writeLoop(0xc002381200)
	/usr/local/go/src/net/http/transport.go:2443 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1260
	/usr/local/go/src/net/http/transport.go:1799 +0x1585

                                                
                                                
goroutine 1922 [select]:
os/exec.(*Cmd).watchCtx(0xc0023562c0, 0xc00069e300)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1863
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                    

Test pass (153/195)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 21.52
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.34
9 TestDownloadOnly/v1.16.0/DeleteAll 0.66
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.38
12 TestDownloadOnly/v1.28.4/json-events 55.14
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.35
18 TestDownloadOnly/v1.28.4/DeleteAll 0.64
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.37
21 TestDownloadOnly/v1.29.0-rc.2/json-events 18.01
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.34
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.67
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.38
29 TestDownloadOnlyKic 1.93
30 TestBinaryMirror 1.63
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.22
36 TestAddons/Setup 339.52
40 TestAddons/parallel/InspektorGadget 10.84
41 TestAddons/parallel/MetricsServer 5.88
42 TestAddons/parallel/HelmTiller 9.86
44 TestAddons/parallel/CSI 49.29
45 TestAddons/parallel/Headlamp 12.53
46 TestAddons/parallel/CloudSpanner 6.7
47 TestAddons/parallel/LocalPath 53.99
48 TestAddons/parallel/NvidiaDevicePlugin 6.65
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.1
53 TestAddons/StoppedEnableDisable 11.74
64 TestErrorSpam/setup 22.25
65 TestErrorSpam/start 2.32
66 TestErrorSpam/status 1.28
67 TestErrorSpam/pause 1.72
68 TestErrorSpam/unpause 1.83
69 TestErrorSpam/stop 11.45
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 37.64
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 37.55
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 9.64
81 TestFunctional/serial/CacheCmd/cache/add_local 1.63
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
83 TestFunctional/serial/CacheCmd/cache/list 0.09
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.43
85 TestFunctional/serial/CacheCmd/cache/cache_reload 3.39
86 TestFunctional/serial/CacheCmd/cache/delete 0.18
87 TestFunctional/serial/MinikubeKubectlCmd 0.57
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.7
89 TestFunctional/serial/ExtraConfig 38.2
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 3.01
92 TestFunctional/serial/LogsFileCmd 3.31
93 TestFunctional/serial/InvalidService 4.65
95 TestFunctional/parallel/ConfigCmd 0.58
96 TestFunctional/parallel/DashboardCmd 13.61
97 TestFunctional/parallel/DryRun 1.38
98 TestFunctional/parallel/InternationalLanguage 0.67
99 TestFunctional/parallel/StatusCmd 1.28
104 TestFunctional/parallel/AddonsCmd 0.31
105 TestFunctional/parallel/PersistentVolumeClaim 28.68
107 TestFunctional/parallel/SSHCmd 0.88
108 TestFunctional/parallel/CpCmd 2.36
109 TestFunctional/parallel/MySQL 32.13
110 TestFunctional/parallel/FileSync 0.48
111 TestFunctional/parallel/CertSync 2.68
115 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
119 TestFunctional/parallel/License 1.51
120 TestFunctional/parallel/Version/short 0.11
121 TestFunctional/parallel/Version/components 0.69
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
126 TestFunctional/parallel/ImageCommands/ImageBuild 5.42
127 TestFunctional/parallel/ImageCommands/Setup 5.44
128 TestFunctional/parallel/DockerEnv/bash 1.88
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.28
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.34
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.28
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.31
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.9
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 9.87
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.48
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.66
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.38
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.32
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.55
140 TestFunctional/parallel/ProfileCmd/profile_list 0.58
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
143 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
144 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.17
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
153 TestFunctional/parallel/ServiceCmd/DeployApp 7.12
154 TestFunctional/parallel/MountCmd/any-port 12.81
155 TestFunctional/parallel/ServiceCmd/List 1.03
156 TestFunctional/parallel/ServiceCmd/JSONOutput 1.04
157 TestFunctional/parallel/ServiceCmd/HTTPS 15.01
158 TestFunctional/parallel/MountCmd/specific-port 2.15
159 TestFunctional/parallel/MountCmd/VerifyCleanup 2.54
160 TestFunctional/parallel/ServiceCmd/Format 15.01
161 TestFunctional/parallel/ServiceCmd/URL 15
162 TestFunctional/delete_addon-resizer_images 0.14
163 TestFunctional/delete_my-image_image 0.05
164 TestFunctional/delete_minikube_cached_images 0.05
168 TestImageBuild/serial/Setup 22.05
169 TestImageBuild/serial/NormalBuild 4.73
170 TestImageBuild/serial/BuildWithBuildArg 1.18
171 TestImageBuild/serial/BuildWithDockerIgnore 1.04
172 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.05
182 TestJSONOutput/start/Command 46.09
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.63
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.62
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 10.83
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.88
207 TestKicCustomNetwork/create_custom_network 25.15
208 TestKicCustomNetwork/use_default_bridge_network 24.13
209 TestKicExistingNetwork 25.15
210 TestKicCustomSubnet 23.76
211 TestKicStaticIP 24.64
212 TestMainNoArgs 0.09
213 TestMinikubeProfile 52.82
216 TestMountStart/serial/StartWithMountFirst 7.61
217 TestMountStart/serial/VerifyMountFirst 0.39
218 TestMountStart/serial/StartWithMountSecond 8.22
219 TestMountStart/serial/VerifyMountSecond 0.38
220 TestMountStart/serial/DeleteFirst 2.06
221 TestMountStart/serial/VerifyMountPostDelete 0.38
222 TestMountStart/serial/Stop 1.56
223 TestMountStart/serial/RestartStopped 9.11
243 TestPreload 165.19
x
+
TestDownloadOnly/v1.16.0/json-events (21.52s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-403000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-403000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (21.518799163s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (21.52s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-403000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-403000: exit status 85 (342.377511ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-403000 | jenkins | v1.32.0 | 29 Feb 24 09:36 PST |          |
	|         | -p download-only-403000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 09:36:55
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 09:36:55.859481    1410 out.go:291] Setting OutFile to fd 1 ...
	I0229 09:36:55.859739    1410 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 09:36:55.859744    1410 out.go:304] Setting ErrFile to fd 2...
	I0229 09:36:55.859748    1410 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 09:36:55.859936    1410 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
	W0229 09:36:55.860033    1410 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18259-932/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18259-932/.minikube/config/config.json: no such file or directory
	I0229 09:36:55.861732    1410 out.go:298] Setting JSON to true
	I0229 09:36:55.886932    1410 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":385,"bootTime":1709227830,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0229 09:36:55.887035    1410 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 09:36:55.909013    1410 out.go:97] [download-only-403000] minikube v1.32.0 on Darwin 14.3.1
	I0229 09:36:55.931097    1410 out.go:169] MINIKUBE_LOCATION=18259
	I0229 09:36:55.909248    1410 notify.go:220] Checking for updates...
	W0229 09:36:55.909231    1410 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball: no such file or directory
	I0229 09:36:55.974826    1410 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig
	I0229 09:36:55.995812    1410 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0229 09:36:56.037583    1410 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 09:36:56.058882    1410 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	W0229 09:36:56.103777    1410 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 09:36:56.104284    1410 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 09:36:56.161441    1410 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 09:36:56.161627    1410 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 09:36:56.271781    1410 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:false NGoroutines:96 SystemTime:2024-02-29 17:36:56.259150345 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:24 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=
cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker De
v Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) f
or an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 09:36:56.292914    1410 out.go:97] Using the docker driver based on user configuration
	I0229 09:36:56.292969    1410 start.go:299] selected driver: docker
	I0229 09:36:56.292983    1410 start.go:903] validating driver "docker" against <nil>
	I0229 09:36:56.293194    1410 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 09:36:56.397785    1410 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:false NGoroutines:96 SystemTime:2024-02-29 17:36:56.388267123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:24 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=
cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker De
v Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) f
or an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 09:36:56.397979    1410 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 09:36:56.402565    1410 start_flags.go:394] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0229 09:36:56.402724    1410 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 09:36:56.424073    1410 out.go:169] Using Docker Desktop driver with root privileges
	I0229 09:36:56.445947    1410 cni.go:84] Creating CNI manager for ""
	I0229 09:36:56.446006    1410 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 09:36:56.446025    1410 start_flags.go:323] config:
	{Name:download-only-403000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:5877 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-403000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 09:36:56.467760    1410 out.go:97] Starting control plane node download-only-403000 in cluster download-only-403000
	I0229 09:36:56.467823    1410 cache.go:121] Beginning downloading kic base image for docker with docker
	I0229 09:36:56.488727    1410 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0229 09:36:56.488762    1410 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 09:36:56.488805    1410 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 09:36:56.537979    1410 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0229 09:36:56.538205    1410 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0229 09:36:56.538342    1410 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0229 09:36:56.763452    1410 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0229 09:36:56.763480    1410 cache.go:56] Caching tarball of preloaded images
	I0229 09:36:56.763782    1410 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 09:36:56.785511    1410 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0229 09:36:56.785538    1410 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0229 09:36:57.330351    1410 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0229 09:37:14.494579    1410 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0229 09:37:14.494770    1410 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0229 09:37:15.009024    1410 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0229 09:37:15.009263    1410 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/download-only-403000/config.json ...
	I0229 09:37:15.009288    1410 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/download-only-403000/config.json: {Name:mk15bb75a0d6dad36eb9842a60c3150782c93cfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 09:37:15.009590    1410 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 09:37:15.009882    1410 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/18259-932/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-403000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.34s)
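The "Last Start" log above walks through the preload flow: fetch the tarball from the URL carrying an md5 checksum query parameter, save it to the cache, then verify the checksum before trusting it. A hedged sketch of just the verification step (the file name and checksum are the ones from the log; the helper itself is illustrative, not minikube's downloader):

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 re-hashes a downloaded preload tarball and compares it with the
// checksum carried in the download URL (e.g. md5:326f3ce331abb64565b50b8c9e791244).
func verifyMD5(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	err := verifyMD5("preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
		"326f3ce331abb64565b50b8c9e791244")
	fmt.Println("verify:", err)
}
```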

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAll (0.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.66s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-403000
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.38s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (55.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-658000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-658000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker : (55.138687566s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (55.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-658000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-658000: exit status 85 (347.556845ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-403000 | jenkins | v1.32.0 | 29 Feb 24 09:36 PST |                     |
	|         | -p download-only-403000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 29 Feb 24 09:37 PST | 29 Feb 24 09:37 PST |
	| delete  | -p download-only-403000        | download-only-403000 | jenkins | v1.32.0 | 29 Feb 24 09:37 PST | 29 Feb 24 09:37 PST |
	| start   | -o=json --download-only        | download-only-658000 | jenkins | v1.32.0 | 29 Feb 24 09:37 PST |                     |
	|         | -p download-only-658000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 09:37:18
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 09:37:18.760558    1486 out.go:291] Setting OutFile to fd 1 ...
	I0229 09:37:18.761254    1486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 09:37:18.761278    1486 out.go:304] Setting ErrFile to fd 2...
	I0229 09:37:18.761285    1486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 09:37:18.761768    1486 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
	I0229 09:37:18.763449    1486 out.go:298] Setting JSON to true
	I0229 09:37:18.785464    1486 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":408,"bootTime":1709227830,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0229 09:37:18.785563    1486 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 09:37:18.806869    1486 out.go:97] [download-only-658000] minikube v1.32.0 on Darwin 14.3.1
	I0229 09:37:18.828800    1486 out.go:169] MINIKUBE_LOCATION=18259
	I0229 09:37:18.807098    1486 notify.go:220] Checking for updates...
	I0229 09:37:18.872700    1486 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig
	I0229 09:37:18.894824    1486 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0229 09:37:18.916736    1486 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 09:37:18.938655    1486 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	W0229 09:37:18.981360    1486 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 09:37:18.981735    1486 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 09:37:19.040120    1486 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 09:37:19.040238    1486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 09:37:19.146182    1486 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:false NGoroutines:96 SystemTime:2024-02-29 17:37:19.13024061 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:24 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=c
groupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev
Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) fo
r an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 09:37:19.167726    1486 out.go:97] Using the docker driver based on user configuration
	I0229 09:37:19.167771    1486 start.go:299] selected driver: docker
	I0229 09:37:19.167786    1486 start.go:903] validating driver "docker" against <nil>
	I0229 09:37:19.168026    1486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 09:37:19.275207    1486 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:false NGoroutines:96 SystemTime:2024-02-29 17:37:19.265626609 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:24 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=
cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker De
v Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) f
or an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 09:37:19.275393    1486 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 09:37:19.278432    1486 start_flags.go:394] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0229 09:37:19.278728    1486 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 09:37:19.300457    1486 out.go:169] Using Docker Desktop driver with root privileges
	I0229 09:37:19.321336    1486 cni.go:84] Creating CNI manager for ""
	I0229 09:37:19.321376    1486 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 09:37:19.321395    1486 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 09:37:19.321412    1486 start_flags.go:323] config:
	{Name:download-only-658000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:5877 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-658000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 09:37:19.343600    1486 out.go:97] Starting control plane node download-only-658000 in cluster download-only-658000
	I0229 09:37:19.343644    1486 cache.go:121] Beginning downloading kic base image for docker with docker
	I0229 09:37:19.365623    1486 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0229 09:37:19.365718    1486 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 09:37:19.365786    1486 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 09:37:19.417205    1486 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0229 09:37:19.417368    1486 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0229 09:37:19.417392    1486 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0229 09:37:19.417399    1486 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0229 09:37:19.417408    1486 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0229 09:37:19.621299    1486 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 09:37:19.621335    1486 cache.go:56] Caching tarball of preloaded images
	I0229 09:37:19.621677    1486 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 09:37:19.643265    1486 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0229 09:37:19.643292    1486 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0229 09:37:20.207344    1486 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 09:37:38.143746    1486 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0229 09:37:38.143930    1486 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0229 09:37:38.724338    1486 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 09:37:38.724573    1486 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/download-only-658000/config.json ...
	I0229 09:37:38.724599    1486 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/download-only-658000/config.json: {Name:mk8a523a4e2ce3d1ff8a3e6fa202476a46cd336c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 09:37:38.724894    1486 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 09:37:38.725096    1486 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18259-932/.minikube/cache/darwin/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-658000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.35s)

TestDownloadOnly/v1.28.4/DeleteAll (0.64s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.64s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-658000
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.37s)

TestDownloadOnly/v1.29.0-rc.2/json-events (18.01s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-878000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-878000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker : (18.010307595s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (18.01s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.34s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-878000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-878000: exit status 85 (337.86424ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-403000 | jenkins | v1.32.0 | 29 Feb 24 09:36 PST |                     |
	|         | -p download-only-403000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 29 Feb 24 09:37 PST | 29 Feb 24 09:37 PST |
	| delete  | -p download-only-403000           | download-only-403000 | jenkins | v1.32.0 | 29 Feb 24 09:37 PST | 29 Feb 24 09:37 PST |
	| start   | -o=json --download-only           | download-only-658000 | jenkins | v1.32.0 | 29 Feb 24 09:37 PST |                     |
	|         | -p download-only-658000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 29 Feb 24 09:38 PST | 29 Feb 24 09:38 PST |
	| delete  | -p download-only-658000           | download-only-658000 | jenkins | v1.32.0 | 29 Feb 24 09:38 PST | 29 Feb 24 09:38 PST |
	| start   | -o=json --download-only           | download-only-878000 | jenkins | v1.32.0 | 29 Feb 24 09:38 PST |                     |
	|         | -p download-only-878000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 09:38:15
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 09:38:15.257542    1557 out.go:291] Setting OutFile to fd 1 ...
	I0229 09:38:15.258272    1557 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 09:38:15.258280    1557 out.go:304] Setting ErrFile to fd 2...
	I0229 09:38:15.258286    1557 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 09:38:15.258887    1557 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
	I0229 09:38:15.260338    1557 out.go:298] Setting JSON to true
	I0229 09:38:15.282341    1557 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":465,"bootTime":1709227830,"procs":416,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0229 09:38:15.282428    1557 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 09:38:15.304785    1557 out.go:97] [download-only-878000] minikube v1.32.0 on Darwin 14.3.1
	I0229 09:38:15.326196    1557 out.go:169] MINIKUBE_LOCATION=18259
	I0229 09:38:15.304998    1557 notify.go:220] Checking for updates...
	I0229 09:38:15.369535    1557 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig
	I0229 09:38:15.391476    1557 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0229 09:38:15.413628    1557 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 09:38:15.435544    1557 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	W0229 09:38:15.478381    1557 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 09:38:15.478937    1557 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 09:38:15.535725    1557 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 09:38:15.535858    1557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 09:38:15.639048    1557 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:97 SystemTime:2024-02-29 17:38:15.628491884 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:25 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=
cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker De
v Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) f
or an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 09:38:15.660345    1557 out.go:97] Using the docker driver based on user configuration
	I0229 09:38:15.660394    1557 start.go:299] selected driver: docker
	I0229 09:38:15.660423    1557 start.go:903] validating driver "docker" against <nil>
	I0229 09:38:15.660688    1557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 09:38:15.768201    1557 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:97 SystemTime:2024-02-29 17:38:15.756438144 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:25 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=
cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker De
v Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) f
or an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 09:38:15.768393    1557 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 09:38:15.771295    1557 start_flags.go:394] Using suggested 5877MB memory alloc based on sys=32768MB, container=5925MB
	I0229 09:38:15.771439    1557 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 09:38:15.792493    1557 out.go:169] Using Docker Desktop driver with root privileges
	I0229 09:38:15.813960    1557 cni.go:84] Creating CNI manager for ""
	I0229 09:38:15.814022    1557 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 09:38:15.814040    1557 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 09:38:15.814053    1557 start_flags.go:323] config:
	{Name:download-only-878000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:5877 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-878000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 09:38:15.835696    1557 out.go:97] Starting control plane node download-only-878000 in cluster download-only-878000
	I0229 09:38:15.835776    1557 cache.go:121] Beginning downloading kic base image for docker with docker
	I0229 09:38:15.857547    1557 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0229 09:38:15.857600    1557 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 09:38:15.857690    1557 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0229 09:38:15.908498    1557 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0229 09:38:15.908695    1557 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0229 09:38:15.908716    1557 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0229 09:38:15.908722    1557 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0229 09:38:15.908731    1557 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0229 09:38:16.123441    1557 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0229 09:38:16.123486    1557 cache.go:56] Caching tarball of preloaded images
	I0229 09:38:16.123869    1557 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 09:38:16.147841    1557 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0229 09:38:16.147870    1557 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0229 09:38:16.695196    1557 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:47acda482c3add5b56147c92b8d7f468 -> /Users/jenkins/minikube-integration/18259-932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-878000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.34s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.67s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.67s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.38s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-878000
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.38s)

TestDownloadOnlyKic (1.93s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-366000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-366000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-366000
--- PASS: TestDownloadOnlyKic (1.93s)

TestBinaryMirror (1.63s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-516000 --alsologtostderr --binary-mirror http://127.0.0.1:49353 --driver=docker 
aaa_download_only_test.go:314: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-516000 --alsologtostderr --binary-mirror http://127.0.0.1:49353 --driver=docker : (1.025789482s)
helpers_test.go:175: Cleaning up "binary-mirror-516000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-516000
--- PASS: TestBinaryMirror (1.63s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-551000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-551000: exit status 85 (194.320403ms)

-- stdout --
	* Profile "addons-551000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-551000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-551000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-551000: exit status 85 (215.445324ms)

-- stdout --
	* Profile "addons-551000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-551000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

TestAddons/Setup (339.52s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-551000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-551000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (5m39.524235162s)
--- PASS: TestAddons/Setup (339.52s)

TestAddons/parallel/InspektorGadget (10.84s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dp692" [fbe98449-d849-4603-a6c6-c8d45f98858d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006421724s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-551000
addons_test.go:841: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-551000: (5.828456558s)
--- PASS: TestAddons/parallel/InspektorGadget (10.84s)

TestAddons/parallel/MetricsServer (5.88s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 4.754381ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-z6f7h" [bb5e9c07-f7df-4f7a-b351-bfcf604b86e7] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005523991s
addons_test.go:415: (dbg) Run:  kubectl --context addons-551000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-amd64 -p addons-551000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.88s)

TestAddons/parallel/HelmTiller (9.86s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.344748ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-xxxgg" [4f8ad9a4-46ea-43df-9203-0651ada52d67] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004403215s
addons_test.go:473: (dbg) Run:  kubectl --context addons-551000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-551000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.152040451s)
addons_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 -p addons-551000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.86s)

TestAddons/parallel/CSI (49.29s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 14.911729ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-551000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-551000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7e6ae19e-89f7-4532-8c5e-5751f243dc5d] Pending
helpers_test.go:344: "task-pv-pod" [7e6ae19e-89f7-4532-8c5e-5751f243dc5d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7e6ae19e-89f7-4532-8c5e-5751f243dc5d] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004904636s
addons_test.go:584: (dbg) Run:  kubectl --context addons-551000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-551000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-551000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-551000 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-551000 delete pod task-pv-pod: (1.226938907s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-551000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-551000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-551000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [165f7927-843c-4c46-86ae-5af6a952d9d5] Pending
helpers_test.go:344: "task-pv-pod-restore" [165f7927-843c-4c46-86ae-5af6a952d9d5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [165f7927-843c-4c46-86ae-5af6a952d9d5] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005121338s
addons_test.go:626: (dbg) Run:  kubectl --context addons-551000 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-551000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-551000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-amd64 -p addons-551000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-amd64 -p addons-551000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.888683672s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-amd64 -p addons-551000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-darwin-amd64 -p addons-551000 addons disable volumesnapshots --alsologtostderr -v=1: (1.111350963s)
--- PASS: TestAddons/parallel/CSI (49.29s)

TestAddons/parallel/Headlamp (12.53s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-551000 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-551000 --alsologtostderr -v=1: (1.520250449s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-cpzpj" [69e7c3a8-e93e-4412-843a-dd456da2201e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-cpzpj" [69e7c3a8-e93e-4412-843a-dd456da2201e] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.007326056s
--- PASS: TestAddons/parallel/Headlamp (12.53s)

TestAddons/parallel/CloudSpanner (6.7s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-l79ml" [5e76ffab-36c4-44a4-8dda-c1ecfea329bf] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.006471188s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-551000
--- PASS: TestAddons/parallel/CloudSpanner (6.70s)

TestAddons/parallel/LocalPath (53.99s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-551000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-551000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-551000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [cf6e4320-3d34-4553-b57e-f9c27a1ec9c1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [cf6e4320-3d34-4553-b57e-f9c27a1ec9c1] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [cf6e4320-3d34-4553-b57e-f9c27a1ec9c1] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.006745929s
addons_test.go:891: (dbg) Run:  kubectl --context addons-551000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-amd64 -p addons-551000 ssh "cat /opt/local-path-provisioner/pvc-5c05bd37-a6ec-460d-90e1-0e31235a1b81_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-551000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-551000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-amd64 -p addons-551000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-darwin-amd64 -p addons-551000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.06899897s)
--- PASS: TestAddons/parallel/LocalPath (53.99s)

TestAddons/parallel/NvidiaDevicePlugin (6.65s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xkfhx" [081f1f83-31be-49fc-904c-82f4befebc44] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006103057s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-551000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.65s)

                                                
                                    
TestAddons/parallel/Yakd (5.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-6vr92" [c43b4fe5-3cd9-463c-b42f-cac2cd8e083e] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.007101656s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.1s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-551000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-551000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)
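This check passes because the gcp-auth addon is expected to replicate its credentials secret into namespaces as they are created, so the get succeeds right after the namespace appears. A hand-run sketch of the same assertion, using the profile and namespace names above:

  kubectl --context addons-551000 create ns new-namespace
  # should return the replicated secret without any extra setup
  kubectl --context addons-551000 get secret gcp-auth -n new-namespace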

                                                
                                    
TestAddons/StoppedEnableDisable (11.74s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-551000
addons_test.go:172: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-551000: (11.005079325s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-551000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-551000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-551000
--- PASS: TestAddons/StoppedEnableDisable (11.74s)
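The point of this test is that addon toggles are accepted while the profile is stopped; presumably only the profile configuration is updated and the change takes effect on the next start. A sketch of the same sequence:

  out/minikube-darwin-amd64 stop -p addons-551000
  # addon commands still work against the stopped profile
  out/minikube-darwin-amd64 addons enable dashboard -p addons-551000
  out/minikube-darwin-amd64 addons disable dashboard -p addons-551000
  out/minikube-darwin-amd64 addons disable gvisor -p addons-551000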

                                                
                                    
TestErrorSpam/setup (22.25s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-703000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-703000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-703000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-703000 --driver=docker : (22.252612144s)
--- PASS: TestErrorSpam/setup (22.25s)

                                                
                                    
TestErrorSpam/start (2.32s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-703000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-703000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-703000 start --dry-run
--- PASS: TestErrorSpam/start (2.32s)

                                                
                                    
TestErrorSpam/status (1.28s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-703000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-703000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-703000 status
--- PASS: TestErrorSpam/status (1.28s)

                                                
                                    
TestErrorSpam/pause (1.72s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-703000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-703000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-703000 pause
--- PASS: TestErrorSpam/pause (1.72s)

                                                
                                    
TestErrorSpam/unpause (1.83s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-703000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-703000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-703000 unpause
--- PASS: TestErrorSpam/unpause (1.83s)

                                                
                                    
TestErrorSpam/stop (11.45s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-703000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-703000 stop: (10.814547436s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-703000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-703000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-703000 stop
--- PASS: TestErrorSpam/stop (11.45s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18259-932/.minikube/files/etc/test/nested/copy/1408/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
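The "local sync path" here is minikube's file-sync mechanism: files staged under $MINIKUBE_HOME/files/ are copied into the node at the same path relative to / when the profile starts. A sketch of staging and later checking such a file, using the MINIKUBE_HOME from this run; the choice of source file is arbitrary:

  # stage a file before (re)starting the profile
  mkdir -p /Users/jenkins/minikube-integration/18259-932/.minikube/files/etc/test/nested/copy/1408
  cp /etc/hosts /Users/jenkins/minikube-integration/18259-932/.minikube/files/etc/test/nested/copy/1408/hosts
  # after the next start, the same path should exist inside the node
  out/minikube-darwin-amd64 -p functional-081000 ssh "sudo cat /etc/test/nested/copy/1408/hosts"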

                                                
                                    
TestFunctional/serial/StartWithProxy (37.64s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-081000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-081000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (37.634926923s)
--- PASS: TestFunctional/serial/StartWithProxy (37.64s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (37.55s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-081000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-081000 --alsologtostderr -v=8: (37.551963207s)
functional_test.go:659: soft start took 37.552487389s for "functional-081000" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.55s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-081000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (9.64s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-081000 cache add registry.k8s.io/pause:3.1: (3.572417386s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-081000 cache add registry.k8s.io/pause:3.3: (3.547391732s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-081000 cache add registry.k8s.io/pause:latest: (2.51912193s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.64s)
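Each cache add above pulls the image on the host, stores it in minikube's on-disk cache, and loads it into the node's container runtime, which is why the three pause tags take a few seconds each. A sketch of the same workflow plus the verification the later cache tests perform:

  out/minikube-darwin-amd64 -p functional-081000 cache add registry.k8s.io/pause:3.1
  out/minikube-darwin-amd64 -p functional-081000 cache add registry.k8s.io/pause:3.3
  out/minikube-darwin-amd64 -p functional-081000 cache add registry.k8s.io/pause:latest
  # cached tags listed on the host...
  out/minikube-darwin-amd64 cache list
  # ...and visible inside the node
  out/minikube-darwin-amd64 -p functional-081000 ssh sudo crictl images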

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.63s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-081000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3152475781/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 cache add minikube-local-cache-test:functional-081000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-081000 cache add minikube-local-cache-test:functional-081000: (1.079106765s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 cache delete minikube-local-cache-test:functional-081000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-081000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.63s)
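add_local covers the other direction: an image built only on the host Docker daemon is pushed into the cluster cache under a profile-scoped tag. A sketch with a hypothetical build-context directory (./localimg stands in for the generated temp directory above):

  # build a throwaway image on the host
  docker build -t minikube-local-cache-test:functional-081000 ./localimg
  # copy it into the minikube cache and node image store
  out/minikube-darwin-amd64 -p functional-081000 cache add minikube-local-cache-test:functional-081000
  # drop it again from the cache and from the host
  out/minikube-darwin-amd64 -p functional-081000 cache delete minikube-local-cache-test:functional-081000
  docker rmi minikube-local-cache-test:functional-081000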

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.43s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.43s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (3.39s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-081000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (411.739266ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-081000 cache reload: (2.142627613s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.39s)
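The non-zero crictl exit above is the expected state: the image was deleted inside the node while the host-side cache still holds it, and cache reload pushes every cached image back into the node. A sketch of the same round trip:

  # remove the image from the node's runtime only
  out/minikube-darwin-amd64 -p functional-081000 ssh sudo docker rmi registry.k8s.io/pause:latest
  # now expected to fail with "no such image"
  out/minikube-darwin-amd64 -p functional-081000 ssh sudo crictl inspecti registry.k8s.io/pause:latest || true
  # restore everything from the host-side cache, then re-check
  out/minikube-darwin-amd64 -p functional-081000 cache reload
  out/minikube-darwin-amd64 -p functional-081000 ssh sudo crictl inspecti registry.k8s.io/pause:latest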

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.57s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 kubectl -- --context functional-081000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.57s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.7s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-081000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.70s)

                                                
                                    
TestFunctional/serial/ExtraConfig (38.2s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-081000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0229 09:49:19.003706    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 09:49:19.009572    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 09:49:19.019901    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 09:49:19.042053    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 09:49:19.082385    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 09:49:19.162986    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 09:49:19.323151    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 09:49:19.644027    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 09:49:20.284244    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 09:49:21.565186    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 09:49:24.125997    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
E0229 09:49:29.246627    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-081000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.141068607s)
functional_test.go:757: restart took 38.141232531s for "functional-081000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.20s)
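The --extra-config flag threads component flags through to kubeadm, so this run restarts the existing profile with an additional apiserver admission plugin and then waits for every component, which accounts for the ~38s "restart". A sketch, with an optional verification step that is not part of the test itself:

  out/minikube-darwin-amd64 start -p functional-081000 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
  # optional: confirm the flag reached the static apiserver pod
  kubectl --context functional-081000 -n kube-system get pod -l component=kube-apiserver \
    -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep admission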

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-081000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (3.01s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-081000 logs: (3.013623992s)
--- PASS: TestFunctional/serial/LogsCmd (3.01s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (3.31s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd2498071976/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-081000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd2498071976/001/logs.txt: (3.312864807s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.31s)

                                                
                                    
TestFunctional/serial/InvalidService (4.65s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-081000 apply -f testdata/invalidsvc.yaml
E0229 09:49:39.486734    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-081000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-081000: exit status 115 (624.1197ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32267 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-081000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.65s)
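Exit status 115 is the SVC_UNREACHABLE error shown in the stderr block: the NodePort URL for invalid-svc resolves, but no running pod backs the service, so the command refuses to treat the URL as usable. A sketch of the same negative check, assuming testdata/invalidsvc.yaml (a service whose selector matches no pods) from the minikube source tree:

  kubectl --context functional-081000 apply -f testdata/invalidsvc.yaml
  # expected to fail: the service exists but has no running endpoints
  out/minikube-darwin-amd64 service invalid-svc -p functional-081000; echo "exit: $?"
  kubectl --context functional-081000 delete -f testdata/invalidsvc.yaml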

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-081000 config get cpus: exit status 14 (64.919ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-081000 config get cpus: exit status 14 (68.626586ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.58s)
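minikube config reads and writes per-profile defaults; getting a key that was never set (or was just unset) exits with status 14, which is what both non-zero exits above assert. A condensed sketch of the same cycle:

  out/minikube-darwin-amd64 -p functional-081000 config set cpus 2
  out/minikube-darwin-amd64 -p functional-081000 config get cpus        # prints 2
  out/minikube-darwin-amd64 -p functional-081000 config unset cpus
  out/minikube-darwin-amd64 -p functional-081000 config get cpus || echo "unset (exit $?)"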

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-081000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-081000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3979: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.61s)

                                                
                                    
TestFunctional/parallel/DryRun (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-081000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-081000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (632.814051ms)

                                                
                                                
-- stdout --
	* [functional-081000] minikube v1.32.0 on Darwin 14.3.1
	  - MINIKUBE_LOCATION=18259
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 09:51:10.770083    3902 out.go:291] Setting OutFile to fd 1 ...
	I0229 09:51:10.770368    3902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 09:51:10.770374    3902 out.go:304] Setting ErrFile to fd 2...
	I0229 09:51:10.770378    3902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 09:51:10.770558    3902 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
	I0229 09:51:10.771999    3902 out.go:298] Setting JSON to false
	I0229 09:51:10.794175    3902 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1240,"bootTime":1709227830,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0229 09:51:10.794309    3902 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 09:51:10.816825    3902 out.go:177] * [functional-081000] minikube v1.32.0 on Darwin 14.3.1
	I0229 09:51:10.860202    3902 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 09:51:10.860278    3902 notify.go:220] Checking for updates...
	I0229 09:51:10.903088    3902 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig
	I0229 09:51:10.924089    3902 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0229 09:51:10.945311    3902 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 09:51:10.966379    3902 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	I0229 09:51:10.988029    3902 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 09:51:11.009774    3902 config.go:182] Loaded profile config "functional-081000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 09:51:11.010396    3902 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 09:51:11.065432    3902 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 09:51:11.065572    3902 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 09:51:11.166523    3902 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:113 SystemTime:2024-02-29 17:51:11.156435239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 09:51:11.208953    3902 out.go:177] * Using the docker driver based on existing profile
	I0229 09:51:11.230085    3902 start.go:299] selected driver: docker
	I0229 09:51:11.230103    3902 start.go:903] validating driver "docker" against &{Name:functional-081000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-081000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 09:51:11.230193    3902 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 09:51:11.255163    3902 out.go:177] 
	W0229 09:51:11.276145    3902 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0229 09:51:11.297165    3902 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-081000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.38s)
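The first dry run deliberately asks for 250MB, below the 1800MB usable minimum, so start exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY before touching the cluster; the second dry run keeps the profile's existing settings and validates cleanly. A sketch of the pair:

  # expected to fail validation on the memory request
  out/minikube-darwin-amd64 start -p functional-081000 --dry-run --memory 250MB --driver=docker || echo "exit: $?"
  # same dry run without the override passes
  out/minikube-darwin-amd64 start -p functional-081000 --dry-run --driver=docker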

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-081000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-081000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (672.840197ms)

                                                
                                                
-- stdout --
	* [functional-081000] minikube v1.32.0 sur Darwin 14.3.1
	  - MINIKUBE_LOCATION=18259
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 09:50:51.302707    3533 out.go:291] Setting OutFile to fd 1 ...
	I0229 09:50:51.302857    3533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 09:50:51.302862    3533 out.go:304] Setting ErrFile to fd 2...
	I0229 09:50:51.302866    3533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 09:50:51.303052    3533 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
	I0229 09:50:51.304695    3533 out.go:298] Setting JSON to false
	I0229 09:50:51.327278    3533 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1221,"bootTime":1709227830,"procs":418,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0229 09:50:51.327379    3533 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 09:50:51.353662    3533 out.go:177] * [functional-081000] minikube v1.32.0 sur Darwin 14.3.1
	I0229 09:50:51.395035    3533 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 09:50:51.416811    3533 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig
	I0229 09:50:51.395059    3533 notify.go:220] Checking for updates...
	I0229 09:50:51.458800    3533 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0229 09:50:51.479760    3533 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 09:50:51.522839    3533 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube
	I0229 09:50:51.564883    3533 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 09:50:51.586475    3533 config.go:182] Loaded profile config "functional-081000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 09:50:51.587092    3533 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 09:50:51.641625    3533 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
	I0229 09:50:51.641777    3533 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0229 09:50:51.746883    3533 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:113 SystemTime:2024-02-29 17:50:51.737207835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:
https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213292032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name
=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker D
ev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM)
for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
	I0229 09:50:51.789235    3533 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0229 09:50:51.810361    3533 start.go:299] selected driver: docker
	I0229 09:50:51.810382    3533 start.go:903] validating driver "docker" against &{Name:functional-081000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-081000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 09:50:51.810488    3533 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 09:50:51.836124    3533 out.go:177] 
	W0229 09:50:51.857428    3533 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0229 09:50:51.878288    3533 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.67s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.28s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.31s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (28.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [390604e0-f2cc-4e05-b56b-1b5d3f4a58d8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005595019s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-081000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-081000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-081000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-081000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3aeae00f-8827-476c-94ed-02babce1b726] Pending
helpers_test.go:344: "sp-pod" [3aeae00f-8827-476c-94ed-02babce1b726] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3aeae00f-8827-476c-94ed-02babce1b726] Running
E0229 09:50:40.927839    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.005438223s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-081000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-081000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-081000 delete -f testdata/storage-provisioner/pod.yaml: (1.041970708s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-081000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7566551d-9593-44fa-84bc-0606d25e189a] Pending
helpers_test.go:344: "sp-pod" [7566551d-9593-44fa-84bc-0606d25e189a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7566551d-9593-44fa-84bc-0606d25e189a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004648533s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-081000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.68s)
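The testdata/storage-provisioner manifests themselves are not reproduced in this report. A minimal equivalent of what the test applies, using only the names visible above (claim myclaim, pod sp-pod, container myfrontend, label test=storage-provisioner), would look roughly like the sketch below; the storage size and container image are assumptions.

    # illustrative sketch only: the real testdata/storage-provisioner/{pvc,pod}.yaml are not shown here
    cat <<'EOF' | kubectl --context functional-081000 apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi        # size assumed, not taken from the log
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: sp-pod
      labels:
        test: storage-provisioner
    spec:
      containers:
      - name: myfrontend
        image: docker.io/library/nginx   # image assumed, not taken from the log
        volumeMounts:
        - name: mypd
          mountPath: /tmp/mount
      volumes:
      - name: mypd
        persistentVolumeClaim:
          claimName: myclaim
    EOF
    # deleting sp-pod, re-applying it, and finding /tmp/mount/foo again is what
    # demonstrates that the claim's data survived the pod deletion above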

                                                
                                    
TestFunctional/parallel/SSHCmd (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.88s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh -n functional-081000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 cp functional-081000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd4190098362/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh -n functional-081000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh -n functional-081000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.36s)

                                                
                                    
TestFunctional/parallel/MySQL (32.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-081000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-mmpbw" [b452c9d0-af96-4ade-8b70-5e8040e1ea47] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-mmpbw" [b452c9d0-af96-4ade-8b70-5e8040e1ea47] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.004270091s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-081000 exec mysql-859648c796-mmpbw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-081000 exec mysql-859648c796-mmpbw -- mysql -ppassword -e "show databases;": exit status 1 (154.21828ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-081000 exec mysql-859648c796-mmpbw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-081000 exec mysql-859648c796-mmpbw -- mysql -ppassword -e "show databases;": exit status 1 (139.056955ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-081000 exec mysql-859648c796-mmpbw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-081000 exec mysql-859648c796-mmpbw -- mysql -ppassword -e "show databases;": exit status 1 (120.44503ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-081000 exec mysql-859648c796-mmpbw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.13s)
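The two non-zero exits above are just retries while mysqld inside the pod is still initializing; functional_test.go:1803 keeps re-running the query until it succeeds. The same probe can be scripted directly, assuming the deployment created by testdata/mysql.yaml is named mysql:

    # keep polling until the server accepts the password configured by testdata/mysql.yaml
    until kubectl --context functional-081000 exec deploy/mysql -- \
        mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
      sleep 2
    done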

                                                
                                    
TestFunctional/parallel/FileSync (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1408/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh "sudo cat /etc/test/nested/copy/1408/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.48s)

                                                
                                    
TestFunctional/parallel/CertSync (2.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1408.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh "sudo cat /etc/ssl/certs/1408.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1408.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh "sudo cat /usr/share/ca-certificates/1408.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14082.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh "sudo cat /etc/ssl/certs/14082.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14082.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh "sudo cat /usr/share/ca-certificates/14082.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.68s)
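The .0 names checked above follow the OpenSSL subject-hash convention used for CA directories, so 51391683.0 should be the hashed alias of the synced 1408.pem (and 3ec20f2e.0 of 14082.pem). Assuming openssl is available inside the node, the hash can be recomputed like this:

    # expected to print 51391683, matching the file name verified above
    out/minikube-darwin-amd64 -p functional-081000 ssh \
        "openssl x509 -noout -hash -in /etc/ssl/certs/1408.pem"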

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-081000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
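The go-template above only prints the label keys of the first node; the same information can be cross-checked without templating:

    kubectl --context functional-081000 get nodes --show-labels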

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-081000 ssh "sudo systemctl is-active crio": exit status 1 (460.860179ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
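The exit status here is the expected outcome rather than a failure: systemctl is-active prints the unit state and exits non-zero when the unit is not active, and on a Docker-runtime cluster the test only asserts that crio reports inactive. Reproduced manually:

    out/minikube-darwin-amd64 -p functional-081000 ssh "sudo systemctl is-active crio"
    # prints "inactive" and returns a non-zero status, which is what the test expects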

                                                
                                    
TestFunctional/parallel/License (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-amd64 license: (1.510314105s)
--- PASS: TestFunctional/parallel/License (1.51s)

                                                
                                    
TestFunctional/parallel/Version/short (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

                                                
                                    
TestFunctional/parallel/Version/components (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-081000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-081000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-081000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-081000 image ls --format short --alsologtostderr:
I0229 09:51:27.471786    4016 out.go:291] Setting OutFile to fd 1 ...
I0229 09:51:27.472154    4016 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 09:51:27.472159    4016 out.go:304] Setting ErrFile to fd 2...
I0229 09:51:27.472163    4016 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 09:51:27.472341    4016 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
I0229 09:51:27.473051    4016 config.go:182] Loaded profile config "functional-081000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 09:51:27.473150    4016 config.go:182] Loaded profile config "functional-081000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 09:51:27.473519    4016 cli_runner.go:164] Run: docker container inspect functional-081000 --format={{.State.Status}}
I0229 09:51:27.528088    4016 ssh_runner.go:195] Run: systemctl --version
I0229 09:51:27.528186    4016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-081000
I0229 09:51:27.582679    4016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50087 SSHKeyPath:/Users/jenkins/minikube-integration/18259-932/.minikube/machines/functional-081000/id_rsa Username:docker}
I0229 09:51:27.666994    4016 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-081000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/minikube-local-cache-test | functional-081000 | e8abdfc39dde5 | 30B    |
| docker.io/library/nginx                     | alpine            | 6913ed9ec8d00 | 42.6MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/google-containers/addon-resizer      | functional-081000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | e4720093a3c13 | 187MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-081000 image ls --format table --alsologtostderr:
I0229 09:51:28.381990    4042 out.go:291] Setting OutFile to fd 1 ...
I0229 09:51:28.382162    4042 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 09:51:28.382167    4042 out.go:304] Setting ErrFile to fd 2...
I0229 09:51:28.382171    4042 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 09:51:28.382371    4042 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
I0229 09:51:28.382987    4042 config.go:182] Loaded profile config "functional-081000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 09:51:28.383082    4042 config.go:182] Loaded profile config "functional-081000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 09:51:28.383501    4042 cli_runner.go:164] Run: docker container inspect functional-081000 --format={{.State.Status}}
I0229 09:51:28.436325    4042 ssh_runner.go:195] Run: systemctl --version
I0229 09:51:28.436401    4042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-081000
I0229 09:51:28.490379    4042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50087 SSHKeyPath:/Users/jenkins/minikube-integration/18259-932/.minikube/machines/functional-081000/id_rsa Username:docker}
I0229 09:51:28.575787    4042 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-081000 image ls --format json --alsologtostderr:
[{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"e8abdfc39dde5ea5f3e525dfcfb488d05599e5665c6297952d17776a85859916","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-081000"],"size":"30"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"e3db313c6dbc065d4
ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-081000"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repo
Digests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"
],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-081000 image ls --format json --alsologtostderr:
I0229 09:51:27.776042    4025 out.go:291] Setting OutFile to fd 1 ...
I0229 09:51:27.776228    4025 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 09:51:27.776234    4025 out.go:304] Setting ErrFile to fd 2...
I0229 09:51:27.776238    4025 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 09:51:27.776428    4025 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
I0229 09:51:27.777021    4025 config.go:182] Loaded profile config "functional-081000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 09:51:27.777116    4025 config.go:182] Loaded profile config "functional-081000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 09:51:27.777495    4025 cli_runner.go:164] Run: docker container inspect functional-081000 --format={{.State.Status}}
I0229 09:51:27.830279    4025 ssh_runner.go:195] Run: systemctl --version
I0229 09:51:27.830355    4025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-081000
I0229 09:51:27.886158    4025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50087 SSHKeyPath:/Users/jenkins/minikube-integration/18259-932/.minikube/machines/functional-081000/id_rsa Username:docker}
I0229 09:51:27.969903    4025 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-081000 image ls --format yaml --alsologtostderr:
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-081000
size: "32900000"
- id: e8abdfc39dde5ea5f3e525dfcfb488d05599e5665c6297952d17776a85859916
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-081000
size: "30"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-081000 image ls --format yaml --alsologtostderr:
I0229 09:51:28.083541    4036 out.go:291] Setting OutFile to fd 1 ...
I0229 09:51:28.083753    4036 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 09:51:28.083758    4036 out.go:304] Setting ErrFile to fd 2...
I0229 09:51:28.083762    4036 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 09:51:28.083948    4036 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
I0229 09:51:28.084617    4036 config.go:182] Loaded profile config "functional-081000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 09:51:28.084707    4036 config.go:182] Loaded profile config "functional-081000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 09:51:28.085092    4036 cli_runner.go:164] Run: docker container inspect functional-081000 --format={{.State.Status}}
I0229 09:51:28.137864    4036 ssh_runner.go:195] Run: systemctl --version
I0229 09:51:28.137945    4036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-081000
I0229 09:51:28.188107    4036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50087 SSHKeyPath:/Users/jenkins/minikube-integration/18259-932/.minikube/machines/functional-081000/id_rsa Username:docker}
I0229 09:51:28.272395    4036 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-081000 ssh pgrep buildkitd: exit status 1 (394.809615ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 image build -t localhost/my-image:functional-081000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-081000 image build -t localhost/my-image:functional-081000 testdata/build --alsologtostderr: (4.715965462s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-081000 image build -t localhost/my-image:functional-081000 testdata/build --alsologtostderr:
I0229 09:51:29.082722    4058 out.go:291] Setting OutFile to fd 1 ...
I0229 09:51:29.083030    4058 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 09:51:29.083036    4058 out.go:304] Setting ErrFile to fd 2...
I0229 09:51:29.083040    4058 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 09:51:29.083310    4058 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18259-932/.minikube/bin
I0229 09:51:29.083922    4058 config.go:182] Loaded profile config "functional-081000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 09:51:29.084674    4058 config.go:182] Loaded profile config "functional-081000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 09:51:29.086282    4058 cli_runner.go:164] Run: docker container inspect functional-081000 --format={{.State.Status}}
I0229 09:51:29.139540    4058 ssh_runner.go:195] Run: systemctl --version
I0229 09:51:29.139619    4058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-081000
I0229 09:51:29.190846    4058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50087 SSHKeyPath:/Users/jenkins/minikube-integration/18259-932/.minikube/machines/functional-081000/id_rsa Username:docker}
I0229 09:51:29.276488    4058 build_images.go:151] Building image from path: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.627073639.tar
I0229 09:51:29.276587    4058 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0229 09:51:29.292664    4058 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.627073639.tar
I0229 09:51:29.297185    4058 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.627073639.tar: stat -c "%s %y" /var/lib/minikube/build/build.627073639.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.627073639.tar': No such file or directory
I0229 09:51:29.297230    4058 ssh_runner.go:362] scp /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.627073639.tar --> /var/lib/minikube/build/build.627073639.tar (3072 bytes)
I0229 09:51:29.337546    4058 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.627073639
I0229 09:51:29.353432    4058 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.627073639 -xf /var/lib/minikube/build/build.627073639.tar
I0229 09:51:29.368956    4058 docker.go:360] Building image: /var/lib/minikube/build/build.627073639
I0229 09:51:29.369049    4058 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-081000 /var/lib/minikube/build/build.627073639
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.4s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 1.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:2d4f87176ec504444e017bf3c62154a92120f1e6868d3b932ba98d82925dcf41 done
#8 naming to localhost/my-image:functional-081000 done
#8 DONE 0.0s
I0229 09:51:33.677859    4058 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-081000 /var/lib/minikube/build/build.627073639: (4.308850007s)
I0229 09:51:33.677923    4058 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.627073639
I0229 09:51:33.693247    4058 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.627073639.tar
I0229 09:51:33.708414    4058 build_images.go:207] Built localhost/my-image:functional-081000 from /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.627073639.tar
I0229 09:51:33.708445    4058 build_images.go:123] succeeded building to: functional-081000
I0229 09:51:33.708464    4058 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.42s)
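The Dockerfile under testdata/build is not included in this report, but the build steps logged above (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) imply something close to the sketch below; the content.txt payload is a placeholder.

    mkdir -p build && cd build
    echo hello > content.txt            # payload not shown in the log, placeholder only
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    out/minikube-darwin-amd64 -p functional-081000 image build \
        -t localhost/my-image:functional-081000 .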

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (5.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.364655173s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-081000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.44s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-081000 docker-env) && out/minikube-darwin-amd64 status -p functional-081000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-081000 docker-env) && out/minikube-darwin-amd64 status -p functional-081000": (1.163638671s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-081000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.88s)
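What the bash assertion exercises, in isolation: docker-env emits export statements that point the local docker CLI at the Docker daemon inside the functional-081000 node, so a plain docker images afterwards lists the cluster runtime's images.

    eval $(out/minikube-darwin-amd64 -p functional-081000 docker-env)
    docker images    # now answered by the daemon inside the minikube node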

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.28s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.34s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 image load --daemon gcr.io/google-containers/addon-resizer:functional-081000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-081000 image load --daemon gcr.io/google-containers/addon-resizer:functional-081000 --alsologtostderr: (4.940597574s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 image load --daemon gcr.io/google-containers/addon-resizer:functional-081000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-081000 image load --daemon gcr.io/google-containers/addon-resizer:functional-081000 --alsologtostderr: (2.580799221s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
E0229 09:49:59.966686    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.402499309s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-081000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 image load --daemon gcr.io/google-containers/addon-resizer:functional-081000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-081000 image load --daemon gcr.io/google-containers/addon-resizer:functional-081000 --alsologtostderr: (4.082194176s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (9.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 image save gcr.io/google-containers/addon-resizer:functional-081000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-081000 image save gcr.io/google-containers/addon-resizer:functional-081000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.477519797s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 image rm gcr.io/google-containers/addon-resizer:functional-081000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-081000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.075388086s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-081000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 image save --daemon gcr.io/google-containers/addon-resizer:functional-081000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-081000 image save --daemon gcr.io/google-containers/addon-resizer:functional-081000 --alsologtostderr: (1.20555386s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-081000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.32s)
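Taken together, the ImageCommands subtests above amount to the round trip below; the tar path is shortened here for readability.

    out/minikube-darwin-amd64 -p functional-081000 image load --daemon \
        gcr.io/google-containers/addon-resizer:functional-081000
    out/minikube-darwin-amd64 -p functional-081000 image save \
        gcr.io/google-containers/addon-resizer:functional-081000 /tmp/addon-resizer-save.tar
    out/minikube-darwin-amd64 -p functional-081000 image rm \
        gcr.io/google-containers/addon-resizer:functional-081000
    out/minikube-darwin-amd64 -p functional-081000 image load /tmp/addon-resizer-save.tar
    out/minikube-darwin-amd64 -p functional-081000 image save --daemon \
        gcr.io/google-containers/addon-resizer:functional-081000
    out/minikube-darwin-amd64 -p functional-081000 image ls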

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "484.616366ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "99.030728ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.58s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "428.767046ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "87.988039ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-081000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-081000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-081000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-081000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3474: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-081000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-081000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [7b5b8e22-dad6-4776-b58f-a5f7ef3d1c37] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [7b5b8e22-dad6-4776-b58f-a5f7ef3d1c37] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.005419533s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.17s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-081000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-081000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3505: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-081000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-081000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-lh6bc" [c0c1b466-d15f-4aff-95a6-dfe45ded28d1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-lh6bc" [c0c1b466-d15f-4aff-95a6-dfe45ded28d1] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.005546308s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.12s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (12.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-081000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port113408809/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1709229051927103000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port113408809/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1709229051927103000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port113408809/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1709229051927103000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port113408809/001/test-1709229051927103000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-081000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (415.221846ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 29 17:50 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 29 17:50 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 29 17:50 test-1709229051927103000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh cat /mount-9p/test-1709229051927103000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-081000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8bb0c696-6838-4ef4-882b-24ffc9753e23] Pending
helpers_test.go:344: "busybox-mount" [8bb0c696-6838-4ef4-882b-24ffc9753e23] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8bb0c696-6838-4ef4-882b-24ffc9753e23] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8bb0c696-6838-4ef4-882b-24ffc9753e23] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.003832417s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-081000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-081000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port113408809/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (12.81s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 service list
functional_test.go:1455: (dbg) Done: out/minikube-darwin-amd64 -p functional-081000 service list: (1.029008032s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-darwin-amd64 -p functional-081000 service list -o json: (1.042054695s)
functional_test.go:1490: Took "1.042166433s" to run "out/minikube-darwin-amd64 -p functional-081000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-081000 service --namespace=default --https --url hello-node: signal: killed (15.006400943s)

                                                
                                                
-- stdout --
	https://127.0.0.1:50427

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:50427
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-081000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port92499946/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-081000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (400.847187ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-081000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port92499946/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-081000 ssh "sudo umount -f /mount-9p": exit status 1 (369.312829ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-081000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-081000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port92499946/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.15s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-081000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup545486893/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-081000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup545486893/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-081000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup545486893/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-081000 ssh "findmnt -T" /mount1: exit status 1 (495.785714ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-081000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-081000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup545486893/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-081000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup545486893/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-081000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup545486893/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 service hello-node --url --format={{.IP}}
2024/02/29 09:51:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-081000 service hello-node --url --format={{.IP}}: signal: killed (15.00901787s)

                                                
                                                
-- stdout --
	127.0.0.1

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.01s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-081000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-081000 service hello-node --url: signal: killed (15.002778915s)

                                                
                                                
-- stdout --
	http://127.0.0.1:50532

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:50532
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.14s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-081000
--- PASS: TestFunctional/delete_addon-resizer_images (0.14s)

                                                
                                    
TestFunctional/delete_my-image_image (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-081000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-081000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                    
TestImageBuild/serial/Setup (22.05s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-606000 --driver=docker 
E0229 09:52:02.847997    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-606000 --driver=docker : (22.047377801s)
--- PASS: TestImageBuild/serial/Setup (22.05s)

                                                
                                    
TestImageBuild/serial/NormalBuild (4.73s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-606000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-606000: (4.734111661s)
--- PASS: TestImageBuild/serial/NormalBuild (4.73s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.18s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-606000
image_test.go:99: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-606000: (1.183180314s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.18s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (1.04s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-606000
image_test.go:133: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-606000: (1.043278822s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.04s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.05s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-606000
image_test.go:88: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-606000: (1.052247306s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.05s)

                                                
                                    
TestJSONOutput/start/Command (46.09s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-816000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-816000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (46.089040329s)
--- PASS: TestJSONOutput/start/Command (46.09s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-816000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-816000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.83s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-816000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-816000 --output=json --user=testUser: (10.829159182s)
--- PASS: TestJSONOutput/stop/Command (10.83s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.88s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-216000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-216000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (479.706138ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"085ba716-d397-4a86-a5dd-69a5f7adcdf9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-216000] minikube v1.32.0 on Darwin 14.3.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a360aff9-442d-4bc2-95e5-2804a0fb0c19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18259"}}
	{"specversion":"1.0","id":"4492a26a-35be-48eb-ab92-710b0289c87e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18259-932/kubeconfig"}}
	{"specversion":"1.0","id":"210a3b5d-9a9a-4409-9974-a3d6f79513d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"98e5f77e-fa77-4303-ac7d-da3d48e4758d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ec578409-503d-4b93-9c41-773719134c38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18259-932/.minikube"}}
	{"specversion":"1.0","id":"c7c729ec-f317-4db6-a8dd-64d0717658ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e9240846-7a4c-430d-a989-eaac85669812","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-216000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-216000
--- PASS: TestErrorJSONOutput (0.88s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (25.15s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-535000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-535000 --network=: (22.667519134s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-535000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-535000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-535000: (2.426476155s)
--- PASS: TestKicCustomNetwork/create_custom_network (25.15s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (24.13s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-635000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-635000 --network=bridge: (21.815996262s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-635000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-635000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-635000: (2.264615866s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.13s)

                                                
                                    
TestKicExistingNetwork (25.15s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-222000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-222000 --network=existing-network: (22.517661971s)
helpers_test.go:175: Cleaning up "existing-network-222000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-222000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-222000: (2.285168749s)
--- PASS: TestKicExistingNetwork (25.15s)

                                                
                                    
TestKicCustomSubnet (23.76s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-242000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-242000 --subnet=192.168.60.0/24: (21.431908026s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-242000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-242000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-242000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-242000: (2.273928044s)
--- PASS: TestKicCustomSubnet (23.76s)

                                                
                                    
TestKicStaticIP (24.64s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-272000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-272000 --static-ip=192.168.200.200: (21.942642648s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-272000 ip
helpers_test.go:175: Cleaning up "static-ip-272000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-272000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-272000: (2.418113673s)
--- PASS: TestKicStaticIP (24.64s)

                                                
                                    
TestMainNoArgs (0.09s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.09s)

                                                
                                    
TestMinikubeProfile (52.82s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-985000 --driver=docker 
E0229 10:04:18.991300    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/addons-551000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-985000 --driver=docker : (23.481355211s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-986000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-986000 --driver=docker : (22.637992193s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-985000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-986000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-986000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-986000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-986000: (2.460144941s)
helpers_test.go:175: Cleaning up "first-985000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-985000
E0229 10:04:50.612005    1408 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18259-932/.minikube/profiles/functional-081000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-985000: (2.435003879s)
--- PASS: TestMinikubeProfile (52.82s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.61s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-399000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-399000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.610304992s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.61s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-399000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.22s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-409000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-409000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (7.221249284s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.22s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-409000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.06s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-399000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-399000 --alsologtostderr -v=5: (2.062031594s)
--- PASS: TestMountStart/serial/DeleteFirst (2.06s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-409000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.56s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-409000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-409000: (1.557067546s)
--- PASS: TestMountStart/serial/Stop (1.56s)

                                                
                                    
TestMountStart/serial/RestartStopped (9.11s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-409000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-409000: (8.110931405s)
--- PASS: TestMountStart/serial/RestartStopped (9.11s)

                                                
                                    
TestPreload (165.19s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-139000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-139000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m25.890477833s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-139000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-139000 image pull gcr.io/k8s-minikube/busybox: (5.343179177s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-139000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-139000: (10.756768709s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-139000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-139000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (1m0.450403399s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-139000 image list
helpers_test.go:175: Cleaning up "test-preload-139000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-139000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-139000: (2.450905456s)
--- PASS: TestPreload (165.19s)

                                                
                                    

Test skip (19/195)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (17.69s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 13.758414ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-lzvvw" [cd9703d7-ab6a-4304-bac7-11bf6802f7be] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008821479s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4lwbb" [85e9bc0a-3cff-4063-98bc-f4371f6db2b8] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005255605s
addons_test.go:340: (dbg) Run:  kubectl --context addons-551000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-551000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-551000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.581402874s)
addons_test.go:355: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (17.69s)

                                                
                                    
TestAddons/parallel/Ingress (10.82s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-551000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-551000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-551000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0c379f49-dd3e-4b32-9892-cafd544d246e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0c379f49-dd3e-4b32-9892-cafd544d246e] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.006919905s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-551000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:282: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (10.82s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (17.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-081000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-081000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-gnt8p" [c049e7f2-6606-4bd6-9838-3c03e9674059] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-gnt8p" [c049e7f2-6606-4bd6-9838-3c03e9674059] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 17.003542148s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (17.19s)
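Here the deployment and NodePort service were created and the pod reached Running in about 17s; the connectivity half of the test is then skipped because port-forwarded drivers cannot exercise it (kubernetes/minikube#7383). The "waiting ... for pods matching" lines come from a polling helper; a rough client-go sketch of that wait, with the kubeconfig path, namespace, selector and timeout assumed from the log (and checking only the Running phase rather than full readiness), is:

	// Simplified sketch of a label-selector wait, not helpers_test.go itself.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Poll every 2s, give up after the 10m0s timeout shown in the log.
		err = wait.PollImmediate(2*time.Second, 10*time.Minute, func() (bool, error) {
			pods, err := client.CoreV1().Pods("default").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "app=hello-node-connect"})
			if err != nil {
				return false, err
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil // not Running yet, keep waiting
		})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("app=hello-node-connect is Running")
	}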

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
