Test Report: Docker_macOS 17585

ea770f64c27c5646b2ec1dfcd286218478f671de:2023-11-07:31788

Tests failed (27/184)

TestOffline (757.23s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-081000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-081000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m36.30695878s)
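When triaging batches of failures like this one, the numeric exit status can be pulled out of the failure line with standard shell tools. A minimal sketch (the `line` variable below is an abbreviated copy of the message above, not the full log entry):

```shell
# Sketch: extract the numeric exit status from a minikube failure line.
# The sample line is a shortened copy of the failure message above.
line='aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start ... : exit status 52 (12m36.30695878s)'
status=$(printf '%s\n' "$line" | sed -n 's/.*exit status \([0-9][0-9]*\).*/\1/p')
echo "$status"   # prints 52
```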

-- stdout --
	* [offline-docker-081000] minikube v1.32.0 on Darwin 14.1
	  - MINIKUBE_LOCATION=17585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node offline-docker-081000 in cluster offline-docker-081000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-081000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1107 16:26:40.394972    8880 out.go:296] Setting OutFile to fd 1 ...
	I1107 16:26:40.395280    8880 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:26:40.395285    8880 out.go:309] Setting ErrFile to fd 2...
	I1107 16:26:40.395289    8880 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:26:40.395466    8880 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
	I1107 16:26:40.397095    8880 out.go:303] Setting JSON to false
	I1107 16:26:40.420697    8880 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":6974,"bootTime":1699396226,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1107 16:26:40.420808    8880 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1107 16:26:40.442289    8880 out.go:177] * [offline-docker-081000] minikube v1.32.0 on Darwin 14.1
	I1107 16:26:40.484023    8880 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 16:26:40.484060    8880 notify.go:220] Checking for updates...
	I1107 16:26:40.525983    8880 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	I1107 16:26:40.553077    8880 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 16:26:40.573980    8880 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 16:26:40.594992    8880 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	I1107 16:26:40.615908    8880 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 16:26:40.637226    8880 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 16:26:40.692643    8880 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.0 (126437)
	I1107 16:26:40.692790    8880 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:26:40.856158    8880 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:148 SystemTime:2023-11-08 00:26:40.813286037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218715136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-e2cce99df426 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=
unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescriptio
n:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1107 16:26:40.877072    8880 out.go:177] * Using the docker driver based on user configuration
	I1107 16:26:40.898148    8880 start.go:298] selected driver: docker
	I1107 16:26:40.898165    8880 start.go:902] validating driver "docker" against <nil>
	I1107 16:26:40.898174    8880 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 16:26:40.901250    8880 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:26:41.002649    8880 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:148 SystemTime:2023-11-08 00:26:40.992704089 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218715136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-e2cce99df426 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=
unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescriptio
n:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1107 16:26:41.002812    8880 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 16:26:41.003031    8880 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 16:26:41.024128    8880 out.go:177] * Using Docker Desktop driver with root privileges
	I1107 16:26:41.045491    8880 cni.go:84] Creating CNI manager for ""
	I1107 16:26:41.045540    8880 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1107 16:26:41.045558    8880 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1107 16:26:41.045589    8880 start_flags.go:323] config:
	{Name:offline-docker-081000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:offline-docker-081000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 16:26:41.067291    8880 out.go:177] * Starting control plane node offline-docker-081000 in cluster offline-docker-081000
	I1107 16:26:41.109325    8880 cache.go:121] Beginning downloading kic base image for docker with docker
	I1107 16:26:41.151304    8880 out.go:177] * Pulling base image ...
	I1107 16:26:41.214001    8880 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 16:26:41.214042    8880 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 16:26:41.214055    8880 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1107 16:26:41.214068    8880 cache.go:56] Caching tarball of preloaded images
	I1107 16:26:41.214222    8880 preload.go:174] Found /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 16:26:41.214233    8880 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1107 16:26:41.215114    8880 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/offline-docker-081000/config.json ...
	I1107 16:26:41.215181    8880 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/offline-docker-081000/config.json: {Name:mkb7c0a5cf6a0eefeeb3bd242d7f4d21290b4763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 16:26:41.267401    8880 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1107 16:26:41.267419    8880 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1107 16:26:41.267438    8880 cache.go:194] Successfully downloaded all kic artifacts
	I1107 16:26:41.267483    8880 start.go:365] acquiring machines lock for offline-docker-081000: {Name:mkff9e93b7e58d8c0d191e3ee1f98d414ec5124a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 16:26:41.267805    8880 start.go:369] acquired machines lock for "offline-docker-081000" in 308.375µs
	I1107 16:26:41.267832    8880 start.go:93] Provisioning new machine with config: &{Name:offline-docker-081000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:offline-docker-081000 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 16:26:41.267937    8880 start.go:125] createHost starting for "" (driver="docker")
	I1107 16:26:41.290725    8880 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1107 16:26:41.291051    8880 start.go:159] libmachine.API.Create for "offline-docker-081000" (driver="docker")
	I1107 16:26:41.291101    8880 client.go:168] LocalClient.Create starting
	I1107 16:26:41.291303    8880 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca.pem
	I1107 16:26:41.291389    8880 main.go:141] libmachine: Decoding PEM data...
	I1107 16:26:41.291424    8880 main.go:141] libmachine: Parsing certificate...
	I1107 16:26:41.291568    8880 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/cert.pem
	I1107 16:26:41.291637    8880 main.go:141] libmachine: Decoding PEM data...
	I1107 16:26:41.291654    8880 main.go:141] libmachine: Parsing certificate...
	I1107 16:26:41.292438    8880 cli_runner.go:164] Run: docker network inspect offline-docker-081000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 16:26:41.383687    8880 cli_runner.go:211] docker network inspect offline-docker-081000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 16:26:41.383789    8880 network_create.go:281] running [docker network inspect offline-docker-081000] to gather additional debugging logs...
	I1107 16:26:41.383811    8880 cli_runner.go:164] Run: docker network inspect offline-docker-081000
	W1107 16:26:41.435654    8880 cli_runner.go:211] docker network inspect offline-docker-081000 returned with exit code 1
	I1107 16:26:41.435685    8880 network_create.go:284] error running [docker network inspect offline-docker-081000]: docker network inspect offline-docker-081000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-081000 not found
	I1107 16:26:41.435700    8880 network_create.go:286] output of [docker network inspect offline-docker-081000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-081000 not found
	
	** /stderr **
	I1107 16:26:41.435829    8880 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 16:26:41.508905    8880 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:26:41.509315    8880 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022ecd70}
	I1107 16:26:41.509336    8880 network_create.go:124] attempt to create docker network offline-docker-081000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1107 16:26:41.509401    8880 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-081000 offline-docker-081000
	I1107 16:26:41.600384    8880 network_create.go:108] docker network offline-docker-081000 192.168.58.0/24 created
	I1107 16:26:41.600423    8880 kic.go:121] calculated static IP "192.168.58.2" for the "offline-docker-081000" container
	I1107 16:26:41.600551    8880 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 16:26:41.654064    8880 cli_runner.go:164] Run: docker volume create offline-docker-081000 --label name.minikube.sigs.k8s.io=offline-docker-081000 --label created_by.minikube.sigs.k8s.io=true
	I1107 16:26:41.708018    8880 oci.go:103] Successfully created a docker volume offline-docker-081000
	I1107 16:26:41.708128    8880 cli_runner.go:164] Run: docker run --rm --name offline-docker-081000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-081000 --entrypoint /usr/bin/test -v offline-docker-081000:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 16:26:42.386197    8880 oci.go:107] Successfully prepared a docker volume offline-docker-081000
	I1107 16:26:42.386238    8880 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 16:26:42.386249    8880 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 16:26:42.386344    8880 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-081000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 16:32:41.227264    8880 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 16:32:41.227385    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:32:41.279821    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	I1107 16:32:41.279953    8880 retry.go:31] will retry after 127.587287ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:41.408488    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:32:41.460445    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	I1107 16:32:41.460538    8880 retry.go:31] will retry after 208.766864ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:41.670380    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:32:41.725599    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	I1107 16:32:41.725708    8880 retry.go:31] will retry after 605.132553ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:42.331824    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:32:42.384622    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	W1107 16:32:42.384725    8880 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	
	W1107 16:32:42.384750    8880 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:42.384806    8880 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 16:32:42.384866    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:32:42.434381    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	I1107 16:32:42.434475    8880 retry.go:31] will retry after 222.030087ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:42.657396    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:32:42.710872    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	I1107 16:32:42.710982    8880 retry.go:31] will retry after 476.39547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:43.189835    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:32:43.243219    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	I1107 16:32:43.243323    8880 retry.go:31] will retry after 493.433616ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:43.739076    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:32:43.792394    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	W1107 16:32:43.792496    8880 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	
	W1107 16:32:43.792517    8880 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:43.792532    8880 start.go:128] duration metric: createHost completed in 6m2.589415673s
	I1107 16:32:43.792540    8880 start.go:83] releasing machines lock for "offline-docker-081000", held for 6m2.58955958s
	W1107 16:32:43.792553    8880 start.go:691] error starting host: creating host: create host timed out in 360.000000 seconds
	I1107 16:32:43.792999    8880 cli_runner.go:164] Run: docker container inspect offline-docker-081000 --format={{.State.Status}}
	W1107 16:32:43.843108    8880 cli_runner.go:211] docker container inspect offline-docker-081000 --format={{.State.Status}} returned with exit code 1
	I1107 16:32:43.843167    8880 delete.go:82] Unable to get host status for offline-docker-081000, assuming it has already been deleted: state: unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	W1107 16:32:43.843241    8880 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1107 16:32:43.843252    8880 start.go:706] Will try again in 5 seconds ...
	I1107 16:32:48.845337    8880 start.go:365] acquiring machines lock for offline-docker-081000: {Name:mkff9e93b7e58d8c0d191e3ee1f98d414ec5124a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 16:32:48.845535    8880 start.go:369] acquired machines lock for "offline-docker-081000" in 136.694µs
	I1107 16:32:48.845572    8880 start.go:96] Skipping create...Using existing machine configuration
	I1107 16:32:48.845588    8880 fix.go:54] fixHost starting: 
	I1107 16:32:48.846123    8880 cli_runner.go:164] Run: docker container inspect offline-docker-081000 --format={{.State.Status}}
	W1107 16:32:48.897700    8880 cli_runner.go:211] docker container inspect offline-docker-081000 --format={{.State.Status}} returned with exit code 1
	I1107 16:32:48.897743    8880 fix.go:102] recreateIfNeeded on offline-docker-081000: state= err=unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:48.897761    8880 fix.go:107] machineExists: false. err=machine does not exist
	I1107 16:32:48.940881    8880 out.go:177] * docker "offline-docker-081000" container is missing, will recreate.
	I1107 16:32:48.962003    8880 delete.go:124] DEMOLISHING offline-docker-081000 ...
	I1107 16:32:48.962167    8880 cli_runner.go:164] Run: docker container inspect offline-docker-081000 --format={{.State.Status}}
	W1107 16:32:49.013104    8880 cli_runner.go:211] docker container inspect offline-docker-081000 --format={{.State.Status}} returned with exit code 1
	W1107 16:32:49.013167    8880 stop.go:75] unable to get state: unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:49.013187    8880 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:49.013565    8880 cli_runner.go:164] Run: docker container inspect offline-docker-081000 --format={{.State.Status}}
	W1107 16:32:49.063658    8880 cli_runner.go:211] docker container inspect offline-docker-081000 --format={{.State.Status}} returned with exit code 1
	I1107 16:32:49.063731    8880 delete.go:82] Unable to get host status for offline-docker-081000, assuming it has already been deleted: state: unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:49.063823    8880 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-081000
	W1107 16:32:49.113686    8880 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-081000 returned with exit code 1
	I1107 16:32:49.113723    8880 kic.go:371] could not find the container offline-docker-081000 to remove it. will try anyways
	I1107 16:32:49.113798    8880 cli_runner.go:164] Run: docker container inspect offline-docker-081000 --format={{.State.Status}}
	W1107 16:32:49.163704    8880 cli_runner.go:211] docker container inspect offline-docker-081000 --format={{.State.Status}} returned with exit code 1
	W1107 16:32:49.163752    8880 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:49.163830    8880 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-081000 /bin/bash -c "sudo init 0"
	W1107 16:32:49.213896    8880 cli_runner.go:211] docker exec --privileged -t offline-docker-081000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1107 16:32:49.213929    8880 oci.go:650] error shutdown offline-docker-081000: docker exec --privileged -t offline-docker-081000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:50.215354    8880 cli_runner.go:164] Run: docker container inspect offline-docker-081000 --format={{.State.Status}}
	W1107 16:32:50.268322    8880 cli_runner.go:211] docker container inspect offline-docker-081000 --format={{.State.Status}} returned with exit code 1
	I1107 16:32:50.268367    8880 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:50.268382    8880 oci.go:664] temporary error: container offline-docker-081000 status is  but expect it to be exited
	I1107 16:32:50.268407    8880 retry.go:31] will retry after 587.449913ms: couldn't verify container is exited. %v: unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:50.858246    8880 cli_runner.go:164] Run: docker container inspect offline-docker-081000 --format={{.State.Status}}
	W1107 16:32:50.912149    8880 cli_runner.go:211] docker container inspect offline-docker-081000 --format={{.State.Status}} returned with exit code 1
	I1107 16:32:50.912195    8880 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:50.912205    8880 oci.go:664] temporary error: container offline-docker-081000 status is  but expect it to be exited
	I1107 16:32:50.912228    8880 retry.go:31] will retry after 603.992233ms: couldn't verify container is exited. %v: unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:51.516720    8880 cli_runner.go:164] Run: docker container inspect offline-docker-081000 --format={{.State.Status}}
	W1107 16:32:51.572166    8880 cli_runner.go:211] docker container inspect offline-docker-081000 --format={{.State.Status}} returned with exit code 1
	I1107 16:32:51.572222    8880 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:51.572232    8880 oci.go:664] temporary error: container offline-docker-081000 status is  but expect it to be exited
	I1107 16:32:51.572254    8880 retry.go:31] will retry after 1.518393669s: couldn't verify container is exited. %v: unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:53.091138    8880 cli_runner.go:164] Run: docker container inspect offline-docker-081000 --format={{.State.Status}}
	W1107 16:32:53.144395    8880 cli_runner.go:211] docker container inspect offline-docker-081000 --format={{.State.Status}} returned with exit code 1
	I1107 16:32:53.144443    8880 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:53.144454    8880 oci.go:664] temporary error: container offline-docker-081000 status is  but expect it to be exited
	I1107 16:32:53.144477    8880 retry.go:31] will retry after 1.522683538s: couldn't verify container is exited. %v: unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:54.667706    8880 cli_runner.go:164] Run: docker container inspect offline-docker-081000 --format={{.State.Status}}
	W1107 16:32:54.722975    8880 cli_runner.go:211] docker container inspect offline-docker-081000 --format={{.State.Status}} returned with exit code 1
	I1107 16:32:54.723022    8880 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:54.723033    8880 oci.go:664] temporary error: container offline-docker-081000 status is  but expect it to be exited
	I1107 16:32:54.723060    8880 retry.go:31] will retry after 1.618221704s: couldn't verify container is exited. %v: unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:56.343605    8880 cli_runner.go:164] Run: docker container inspect offline-docker-081000 --format={{.State.Status}}
	W1107 16:32:56.448913    8880 cli_runner.go:211] docker container inspect offline-docker-081000 --format={{.State.Status}} returned with exit code 1
	I1107 16:32:56.448953    8880 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:32:56.448964    8880 oci.go:664] temporary error: container offline-docker-081000 status is  but expect it to be exited
	I1107 16:32:56.448997    8880 retry.go:31] will retry after 4.964569409s: couldn't verify container is exited. %v: unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:33:01.415776    8880 cli_runner.go:164] Run: docker container inspect offline-docker-081000 --format={{.State.Status}}
	W1107 16:33:01.470371    8880 cli_runner.go:211] docker container inspect offline-docker-081000 --format={{.State.Status}} returned with exit code 1
	I1107 16:33:01.470419    8880 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:33:01.470431    8880 oci.go:664] temporary error: container offline-docker-081000 status is  but expect it to be exited
	I1107 16:33:01.470453    8880 retry.go:31] will retry after 8.122243732s: couldn't verify container is exited. %v: unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:33:09.594732    8880 cli_runner.go:164] Run: docker container inspect offline-docker-081000 --format={{.State.Status}}
	W1107 16:33:09.649966    8880 cli_runner.go:211] docker container inspect offline-docker-081000 --format={{.State.Status}} returned with exit code 1
	I1107 16:33:09.650020    8880 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:33:09.650033    8880 oci.go:664] temporary error: container offline-docker-081000 status is  but expect it to be exited
	I1107 16:33:09.650060    8880 oci.go:88] couldn't shut down offline-docker-081000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	 
	I1107 16:33:09.650126    8880 cli_runner.go:164] Run: docker rm -f -v offline-docker-081000
	I1107 16:33:09.700815    8880 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-081000
	W1107 16:33:09.750908    8880 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-081000 returned with exit code 1
	I1107 16:33:09.751028    8880 cli_runner.go:164] Run: docker network inspect offline-docker-081000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 16:33:09.802121    8880 cli_runner.go:164] Run: docker network rm offline-docker-081000
	I1107 16:33:09.905873    8880 fix.go:114] Sleeping 1 second for extra luck!
	I1107 16:33:10.906338    8880 start.go:125] createHost starting for "" (driver="docker")
	I1107 16:33:10.928019    8880 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1107 16:33:10.928224    8880 start.go:159] libmachine.API.Create for "offline-docker-081000" (driver="docker")
	I1107 16:33:10.928257    8880 client.go:168] LocalClient.Create starting
	I1107 16:33:10.928463    8880 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca.pem
	I1107 16:33:10.928550    8880 main.go:141] libmachine: Decoding PEM data...
	I1107 16:33:10.928579    8880 main.go:141] libmachine: Parsing certificate...
	I1107 16:33:10.928672    8880 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/cert.pem
	I1107 16:33:10.928744    8880 main.go:141] libmachine: Decoding PEM data...
	I1107 16:33:10.928763    8880 main.go:141] libmachine: Parsing certificate...
	I1107 16:33:10.950313    8880 cli_runner.go:164] Run: docker network inspect offline-docker-081000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 16:33:11.002382    8880 cli_runner.go:211] docker network inspect offline-docker-081000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 16:33:11.002488    8880 network_create.go:281] running [docker network inspect offline-docker-081000] to gather additional debugging logs...
	I1107 16:33:11.002504    8880 cli_runner.go:164] Run: docker network inspect offline-docker-081000
	W1107 16:33:11.053147    8880 cli_runner.go:211] docker network inspect offline-docker-081000 returned with exit code 1
	I1107 16:33:11.053180    8880 network_create.go:284] error running [docker network inspect offline-docker-081000]: docker network inspect offline-docker-081000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-081000 not found
	I1107 16:33:11.053198    8880 network_create.go:286] output of [docker network inspect offline-docker-081000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-081000 not found
	
	** /stderr **
	I1107 16:33:11.053369    8880 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 16:33:11.105089    8880 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:33:11.106444    8880 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:33:11.108049    8880 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:33:11.109523    8880 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:33:11.109945    8880 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022ed810}
	I1107 16:33:11.109957    8880 network_create.go:124] attempt to create docker network offline-docker-081000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I1107 16:33:11.110028    8880 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-081000 offline-docker-081000
	I1107 16:33:11.195690    8880 network_create.go:108] docker network offline-docker-081000 192.168.85.0/24 created
	I1107 16:33:11.195728    8880 kic.go:121] calculated static IP "192.168.85.2" for the "offline-docker-081000" container
	I1107 16:33:11.195865    8880 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 16:33:11.247664    8880 cli_runner.go:164] Run: docker volume create offline-docker-081000 --label name.minikube.sigs.k8s.io=offline-docker-081000 --label created_by.minikube.sigs.k8s.io=true
	I1107 16:33:11.298795    8880 oci.go:103] Successfully created a docker volume offline-docker-081000
	I1107 16:33:11.298933    8880 cli_runner.go:164] Run: docker run --rm --name offline-docker-081000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-081000 --entrypoint /usr/bin/test -v offline-docker-081000:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 16:33:11.700374    8880 oci.go:107] Successfully prepared a docker volume offline-docker-081000
	I1107 16:33:11.700419    8880 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 16:33:11.700432    8880 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 16:33:11.700528    8880 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-081000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 16:39:10.914628    8880 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 16:39:10.914751    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:39:10.967992    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	I1107 16:39:10.968101    8880 retry.go:31] will retry after 340.938697ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:39:11.309566    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:39:11.360927    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	I1107 16:39:11.361042    8880 retry.go:31] will retry after 217.154717ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:39:11.578691    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:39:11.634058    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	I1107 16:39:11.634155    8880 retry.go:31] will retry after 621.887086ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:39:12.258386    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:39:12.311810    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	W1107 16:39:12.311927    8880 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	
	W1107 16:39:12.311948    8880 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:39:12.312008    8880 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 16:39:12.312067    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:39:12.363131    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	I1107 16:39:12.363232    8880 retry.go:31] will retry after 172.801803ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:39:12.537237    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:39:12.589150    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	I1107 16:39:12.589249    8880 retry.go:31] will retry after 546.80004ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:39:13.136709    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:39:13.188574    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	I1107 16:39:13.188677    8880 retry.go:31] will retry after 631.282644ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:39:13.821317    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:39:13.874010    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	W1107 16:39:13.874114    8880 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	
	W1107 16:39:13.874142    8880 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:39:13.874157    8880 start.go:128] duration metric: createHost completed in 6m2.982128984s
	I1107 16:39:13.874217    8880 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 16:39:13.874280    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:39:13.924005    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	I1107 16:39:13.924098    8880 retry.go:31] will retry after 223.397855ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:39:14.147965    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:39:14.201156    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	I1107 16:39:14.201245    8880 retry.go:31] will retry after 244.362164ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:39:14.448019    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:39:14.502441    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	I1107 16:39:14.502528    8880 retry.go:31] will retry after 704.788486ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:39:15.209272    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:39:15.263909    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	W1107 16:39:15.264008    8880 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	
	W1107 16:39:15.264030    8880 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:39:15.264098    8880 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 16:39:15.264164    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:39:15.314154    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	I1107 16:39:15.314245    8880 retry.go:31] will retry after 278.676186ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:39:15.593858    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:39:15.648792    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	I1107 16:39:15.648881    8880 retry.go:31] will retry after 371.336323ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:39:16.022564    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:39:16.074798    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	I1107 16:39:16.074905    8880 retry.go:31] will retry after 310.310509ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:39:16.385654    8880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000
	W1107 16:39:16.436905    8880 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000 returned with exit code 1
	W1107 16:39:16.437000    8880 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	
	W1107 16:39:16.437018    8880 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-081000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-081000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000
	I1107 16:39:16.437028    8880 fix.go:56] fixHost completed within 6m27.606735859s
	I1107 16:39:16.437036    8880 start.go:83] releasing machines lock for "offline-docker-081000", held for 6m27.606781624s
	W1107 16:39:16.437123    8880 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-081000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-081000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1107 16:39:16.478923    8880 out.go:177] 
	W1107 16:39:16.500060    8880 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1107 16:39:16.500106    8880 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1107 16:39:16.500143    8880 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1107 16:39:16.520937    8880 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-081000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:523: *** TestOffline FAILED at 2023-11-07 16:39:16.577224 -0800 PST m=+5935.565639617
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-081000
helpers_test.go:235: (dbg) docker inspect offline-docker-081000:

-- stdout --
	[
	    {
	        "Name": "offline-docker-081000",
	        "Id": "6080019cabe63d39743e7bdec9d29f6d3a2cd120a2d9f3ef6543cd90e841079e",
	        "Created": "2023-11-08T00:33:11.157564455Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-081000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-081000 -n offline-docker-081000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-081000 -n offline-docker-081000: exit status 7 (112.529256ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1107 16:39:16.750124    9490 status.go:249] status error: host: state: unknown state "offline-docker-081000": docker container inspect offline-docker-081000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-081000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-081000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-081000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-081000
--- FAIL: TestOffline (757.23s)

TestCertOptions (7200.765s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-982000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E1107 16:53:43.742348    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 16:53:49.655660    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 16:54:06.593658    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 16:58:43.815447    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 16:59:06.667672    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestCertExpiration (8m29s)
	TestCertOptions (8m8s)
	TestNetworkPlugins (33m41s)
	TestNetworkPlugins/group (33m41s)

goroutine 2155 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2259 +0x3b9
created by time.goFunc
	/usr/local/go/src/time/sleep.go:176 +0x2d

goroutine 1 [chan receive, 21 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc000602b60, 0xc000a07b80)
	/usr/local/go/src/testing/testing.go:1601 +0x138
testing.runTests(0xc0004d39a0?, {0x525f460, 0x2a, 0x2a}, {0x10b0105?, 0xc000068180?, 0x5280be0?})
	/usr/local/go/src/testing/testing.go:2052 +0x445
testing.(*M).Run(0xc0004d39a0)
	/usr/local/go/src/testing/testing.go:1925 +0x636
k8s.io/minikube/test/integration.TestMain(0xc00008a6f0?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x88
main.main()
	_testmain.go:131 +0x1c6

goroutine 12 [select, 2 minutes]:
go.opencensus.io/stats/view.(*worker).start(0xc000185400)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 2128 [select, 8 minutes]:
os/exec.(*Cmd).watchCtx(0xc00238e580, 0xc0027ae2a0)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 615
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

goroutine 1853 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0009b8960)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000602680)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000602680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000602680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000602680, 0xc00047e500)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1847
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1864 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0009b8960)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0025f1a00)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0025f1a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc0025f1a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:305 +0xb4
testing.tRunner(0xc0025f1a00, 0x3b30d28)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2126 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x4c54ff00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc002baaa20?, 0xc002bfeaf0?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002baaa20, {0xc002bfeaf0, 0x510, 0x510})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00081c1b0, {0xc002bfeaf0?, 0xc00237ae68?, 0xc00237ae68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002be6510, {0x3f7d200, 0xc00081c1b0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f7d280, 0xc002be6510}, {0x3f7d200, 0xc00081c1b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0x6f6853205d74696e?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 615
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

goroutine 65 [select, 2 minutes]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.110.1/klog.go:1157 +0x111
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 16
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.110.1/klog.go:1153 +0x171

goroutine 1854 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0009b8960)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000602d00)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000602d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000602d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000602d00, 0xc00047e580)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1847
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 614 [syscall, 8 minutes]:
syscall.syscall6(0x1010585?, 0xc000bcb8f8?, 0xc000bcb7e8?, 0xc000bcb918?, 0x100c000bcb8e0?, 0x1000000000003?, 0x4d2db8f8?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc000bcb890?, 0x1010905?, 0x90?, 0x30524c0?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:43 +0x45
syscall.Wait4(0xc000794380?, 0xc000bcb8c4, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc000a00f90)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00238e6e0)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc0022f9520?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc0022f9520, 0xc00238e6e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.TestCertOptions(0xc0022f9520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x40e
testing.tRunner(0xc0022f9520, 0x3b30c60)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2173 [select, 8 minutes]:
os/exec.(*Cmd).watchCtx(0xc00238e6e0, 0xc0027ae4e0)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 614
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

goroutine 2127 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x4d32a140, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc002baaae0?, 0xc000681463?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002baaae0, {0xc000681463, 0x39d, 0x39d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00081c1f8, {0xc000681463?, 0xc00237b668?, 0xc00237b668?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002be6540, {0x3f7d200, 0xc00081c1f8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f7d280, 0xc002be6540}, {0x3f7d200, 0xc00081c1f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc00262c6e0?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 615
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

goroutine 923 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00248f0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.3/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 830
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.3/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 615 [syscall, 8 minutes]:
syscall.syscall6(0x1010585?, 0xc000a0ba98?, 0xc000a0b988?, 0xc000a0bab8?, 0x100c000a0ba80?, 0x1000000000003?, 0x4d2db8f8?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc000a0ba30?, 0x1010905?, 0x90?, 0x30524c0?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:43 +0x45
syscall.Wait4(0xc00087d9d0?, 0xc000a0ba64, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc000a00240)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00238e580)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc0022f96c0?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc0022f96c0, 0xc00238e580)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.TestCertExpiration(0xc0022f96c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:123 +0x2d7
testing.tRunner(0xc0022f96c0, 0x3b30c58)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 150 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000ad6a20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.3/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 139
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.3/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 151 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0022ac8c0, 0xc00098c060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 139
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.3/transport/cache.go:122 +0x594

goroutine 154 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0022ac890, 0x2d)
	/usr/local/go/src/runtime/sema.go:527 +0x159
sync.(*Cond).Wait(0x3f7a1b0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000ad6900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.3/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0022ac8c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3f7e720, 0xc00086fe90}, 0x1, 0xc00098c060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc0001117d0?, 0x15e84c5?, 0xc000ad6a20?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.3/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 151
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.3/transport/cert_rotation.go:140 +0x1ef

goroutine 155 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3fa1198, 0xc00098c060}, 0xc000114f50, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.3/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3fa1198, 0xc00098c060}, 0x21?, 0x3b31038?, 0x31345bd?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3fa1198?, 0xc00098c060?}, 0xc0005a3860?, 0x1137540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x1138405?, 0xc0005a3860?, 0xc00079e9c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 151
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.3/transport/cert_rotation.go:142 +0x29a

goroutine 156 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.3/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 155
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.3/pkg/util/wait/poll.go:280 +0xc5

goroutine 908 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0027536d0, 0x2c)
	/usr/local/go/src/runtime/sema.go:527 +0x159
sync.(*Cond).Wait(0x3f7a1b0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00248efc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.3/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002753700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00081dcb0?, {0x3f7e720, 0xc000d70f60}, 0x1, 0xc00098c060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0029905a0?, 0x3b9aca00, 0x0, 0xd0?, 0x104473c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x117bce5?, 0xc0022a2f20?, 0xc00098de60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.3/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 924
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.3/transport/cert_rotation.go:140 +0x1ef

goroutine 1773 [chan receive, 33 minutes]:
testing.(*T).Run(0xc0025f0d00, {0x30e4394?, 0x55e77789d44?}, 0xc0022d2018)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0025f0d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0025f0d00, 0x3b30d40)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1855 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0009b8960)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000502820)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000502820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000502820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000502820, 0xc00047e600)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1847
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 924 [chan receive, 113 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002753700, 0xc00098c060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 830
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.3/transport/cache.go:122 +0x594

goroutine 1847 [chan receive, 33 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc0005a2680, 0xc0022d2018)
	/usr/local/go/src/testing/testing.go:1601 +0x138
created by testing.(*T).Run in goroutine 1773
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1330 [select, 111 minutes]:
net/http.(*persistConn).readLoop(0xc0028b57a0)
	/usr/local/go/src/net/http/transport.go:2238 +0xd25
created by net/http.(*Transport).dialConn in goroutine 1348
	/usr/local/go/src/net/http/transport.go:1776 +0x169f

goroutine 692 [IO wait, 115 minutes]:
internal/poll.runtime_pollWait(0x4c54fe08, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc00047e980?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00047e980)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc00047e980)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc002360360)
	/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e
net.(*TCPListener).Accept(0xc002360360)
	/usr/local/go/src/net/tcpsock.go:315 +0x30
net/http.(*Server).Serve(0xc000672b40, {0x3f947a0, 0xc002360360})
	/usr/local/go/src/net/http/server.go:3056 +0x364
net/http.(*Server).ListenAndServe(0xc000672b40)
	/usr/local/go/src/net/http/server.go:2985 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00244d1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 689
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x13a

goroutine 1852 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0009b8960)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0005a3d40)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0005a3d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0005a3d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc0005a3d40, 0xc00047e400)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1847
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1862 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0009b8960)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0025f1520)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0025f1520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc0025f1520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:156 +0x86
testing.tRunner(0xc0025f1520, 0x3b30d90)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1331 [select, 111 minutes]:
net/http.(*persistConn).writeLoop(0xc0028b57a0)
	/usr/local/go/src/net/http/transport.go:2421 +0xe5
created by net/http.(*Transport).dialConn in goroutine 1348
	/usr/local/go/src/net/http/transport.go:1777 +0x16f1

goroutine 1249 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc0029a0420, 0xc002927ce0)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 817
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

goroutine 1848 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0009b8960)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0005a2b60)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0005a2b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0005a2b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc0005a2b60, 0xc00047e200)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1847
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 910 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.3/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 909
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.3/pkg/util/wait/poll.go:280 +0xc5

goroutine 1850 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0009b8960)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0005a3860)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0005a3860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0005a3860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc0005a3860, 0xc00047e300)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1847
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 909 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3fa1198, 0xc00098c060}, 0xc000ae1f50, 0xc000ad6898?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.3/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3fa1198, 0xc00098c060}, 0x1?, 0x1?, 0xc000ae1fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3fa1198?, 0xc00098c060?}, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000ae1fd0?, 0x117bd47?, 0xc00080e600?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 924
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.3/transport/cert_rotation.go:142 +0x29a

goroutine 1856 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0009b8960)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0005029c0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0005029c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0005029c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc0005029c0, 0xc00047e680)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1847
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1863 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0009b8960)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0025f16c0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0025f16c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc0025f16c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:228 +0x39
testing.tRunner(0xc0025f16c0, 0x3b30d10)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1849 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0009b8960)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0005a31e0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0005a31e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0005a31e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc0005a31e0, 0xc00047e280)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1847
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1836 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0009b8960)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0025f0b60)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0025f0b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc0029f86c0?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc0025f0b60, 0x3b30d88)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1851 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0009b8960)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0005a3a00)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0005a3a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0005a3a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc0005a3a00, 0xc00047e380)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1847
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1861 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0009b8960)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0025f09c0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0025f09c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc0025f09c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:98 +0x89
testing.tRunner(0xc0025f09c0, 0x3b30d68)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2172 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x4c5502e0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc002baa900?, 0xc000681863?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002baa900, {0xc000681863, 0x39d, 0x39d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00081c1d0, {0xc000681863?, 0xc00010fe68?, 0xc00010fe68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002be6810, {0x3f7d200, 0xc00081c1d0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f7d280, 0xc002be6810}, {0x3f7d200, 0xc00081c1d0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc002b1c540?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 614
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

goroutine 1775 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0009b8960)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0025f1040)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0025f1040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc0025f1040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc0025f1040, 0x3b30d58)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1774 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc0009b8960)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc0025f0ea0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0025f0ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc0025f0ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc0025f0ea0, 0x3b30d48)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1065 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc00238eb00, 0xc0023886c0)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1064
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

goroutine 1283 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc0028dd1e0, 0xc0027afe00)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1282
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

goroutine 2171 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x4c54fd10, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc002baa840?, 0xc002bffae4?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002baa840, {0xc002bffae4, 0x51c, 0x51c})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00081c170, {0xc002bffae4?, 0xc000a1d440?, 0xc002379668?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002be67e0, {0x3f7d200, 0xc00081c170})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f7d280, 0xc002be67e0}, {0x3f7d200, 0xc00081c170}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc0027ae420?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 614
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

goroutine 1210 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc002947760, 0xc002927020)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1209
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

TestDockerFlags (752.24s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-907000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E1107 16:43:43.766122    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 16:44:06.618100    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 16:48:26.817606    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 16:48:43.754054    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 16:49:06.605683    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-907000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 52 (12m30.939945232s)

-- stdout --
	* [docker-flags-907000] minikube v1.32.0 on Darwin 14.1
	  - MINIKUBE_LOCATION=17585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node docker-flags-907000 in cluster docker-flags-907000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-907000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1107 16:39:41.189835    9631 out.go:296] Setting OutFile to fd 1 ...
	I1107 16:39:41.190052    9631 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:39:41.190057    9631 out.go:309] Setting ErrFile to fd 2...
	I1107 16:39:41.190061    9631 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:39:41.190236    9631 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
	I1107 16:39:41.191709    9631 out.go:303] Setting JSON to false
	I1107 16:39:41.214217    9631 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":7755,"bootTime":1699396226,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1107 16:39:41.214318    9631 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1107 16:39:41.236255    9631 out.go:177] * [docker-flags-907000] minikube v1.32.0 on Darwin 14.1
	I1107 16:39:41.278859    9631 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 16:39:41.257801    9631 notify.go:220] Checking for updates...
	I1107 16:39:41.320627    9631 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	I1107 16:39:41.341907    9631 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 16:39:41.363963    9631 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 16:39:41.385785    9631 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	I1107 16:39:41.406841    9631 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 16:39:41.428752    9631 config.go:182] Loaded profile config "force-systemd-flag-919000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 16:39:41.428915    9631 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 16:39:41.484712    9631 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.0 (126437)
	I1107 16:39:41.484848    9631 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:39:41.584700    9631 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:false NGoroutines:198 SystemTime:2023-11-08 00:39:41.574964021 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSe
rverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218715136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-e2cce99df426 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profil
e=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescript
ion:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription
:Docker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1107 16:39:41.606326    9631 out.go:177] * Using the docker driver based on user configuration
	I1107 16:39:41.627295    9631 start.go:298] selected driver: docker
	I1107 16:39:41.627324    9631 start.go:902] validating driver "docker" against <nil>
	I1107 16:39:41.627338    9631 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 16:39:41.631783    9631 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:39:41.731876    9631 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:false NGoroutines:198 SystemTime:2023-11-08 00:39:41.723022008 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSe
rverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218715136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-e2cce99df426 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profil
e=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescript
ion:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription
:Docker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1107 16:39:41.732067    9631 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 16:39:41.732250    9631 start_flags.go:926] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1107 16:39:41.753800    9631 out.go:177] * Using Docker Desktop driver with root privileges
	I1107 16:39:41.775676    9631 cni.go:84] Creating CNI manager for ""
	I1107 16:39:41.775724    9631 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1107 16:39:41.775741    9631 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1107 16:39:41.775765    9631 start_flags.go:323] config:
	{Name:docker-flags-907000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:docker-flags-907000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 16:39:41.797664    9631 out.go:177] * Starting control plane node docker-flags-907000 in cluster docker-flags-907000
	I1107 16:39:41.839548    9631 cache.go:121] Beginning downloading kic base image for docker with docker
	I1107 16:39:41.862602    9631 out.go:177] * Pulling base image ...
	I1107 16:39:41.904677    9631 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 16:39:41.904752    9631 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1107 16:39:41.904770    9631 cache.go:56] Caching tarball of preloaded images
	I1107 16:39:41.904825    9631 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 16:39:41.905021    9631 preload.go:174] Found /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 16:39:41.905035    9631 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1107 16:39:41.905148    9631 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/docker-flags-907000/config.json ...
	I1107 16:39:41.905174    9631 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/docker-flags-907000/config.json: {Name:mkfcb3811746dda28ec76cdaef8df141f0b5dd4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 16:39:41.956861    9631 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1107 16:39:41.957087    9631 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1107 16:39:41.957118    9631 cache.go:194] Successfully downloaded all kic artifacts
	I1107 16:39:41.957182    9631 start.go:365] acquiring machines lock for docker-flags-907000: {Name:mk22374b81cc5bbb19eb7a3156119787b2788e51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 16:39:41.957333    9631 start.go:369] acquired machines lock for "docker-flags-907000" in 133.161µs
	I1107 16:39:41.957357    9631 start.go:93] Provisioning new machine with config: &{Name:docker-flags-907000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:docker-flags-907000 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 16:39:41.957416    9631 start.go:125] createHost starting for "" (driver="docker")
	I1107 16:39:41.981440    9631 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1107 16:39:41.981805    9631 start.go:159] libmachine.API.Create for "docker-flags-907000" (driver="docker")
	I1107 16:39:41.981859    9631 client.go:168] LocalClient.Create starting
	I1107 16:39:41.982028    9631 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca.pem
	I1107 16:39:41.982119    9631 main.go:141] libmachine: Decoding PEM data...
	I1107 16:39:41.982153    9631 main.go:141] libmachine: Parsing certificate...
	I1107 16:39:41.982277    9631 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/cert.pem
	I1107 16:39:41.982353    9631 main.go:141] libmachine: Decoding PEM data...
	I1107 16:39:41.982371    9631 main.go:141] libmachine: Parsing certificate...
	I1107 16:39:41.983357    9631 cli_runner.go:164] Run: docker network inspect docker-flags-907000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 16:39:42.035122    9631 cli_runner.go:211] docker network inspect docker-flags-907000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 16:39:42.035236    9631 network_create.go:281] running [docker network inspect docker-flags-907000] to gather additional debugging logs...
	I1107 16:39:42.035269    9631 cli_runner.go:164] Run: docker network inspect docker-flags-907000
	W1107 16:39:42.085379    9631 cli_runner.go:211] docker network inspect docker-flags-907000 returned with exit code 1
	I1107 16:39:42.085424    9631 network_create.go:284] error running [docker network inspect docker-flags-907000]: docker network inspect docker-flags-907000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-907000 not found
	I1107 16:39:42.085437    9631 network_create.go:286] output of [docker network inspect docker-flags-907000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-907000 not found
	
	** /stderr **
	I1107 16:39:42.085554    9631 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 16:39:42.137639    9631 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:39:42.139073    9631 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:39:42.139427    9631 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022fc1e0}
	I1107 16:39:42.139444    9631 network_create.go:124] attempt to create docker network docker-flags-907000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1107 16:39:42.139506    9631 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-907000 docker-flags-907000
	W1107 16:39:42.190907    9631 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-907000 docker-flags-907000 returned with exit code 1
	W1107 16:39:42.190941    9631 network_create.go:149] failed to create docker network docker-flags-907000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-907000 docker-flags-907000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1107 16:39:42.190960    9631 network_create.go:116] failed to create docker network docker-flags-907000 192.168.67.0/24, will retry: subnet is taken
	I1107 16:39:42.192576    9631 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:39:42.192929    9631 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022fd290}
	I1107 16:39:42.192941    9631 network_create.go:124] attempt to create docker network docker-flags-907000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1107 16:39:42.193002    9631 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-907000 docker-flags-907000
	I1107 16:39:42.279087    9631 network_create.go:108] docker network docker-flags-907000 192.168.76.0/24 created
	I1107 16:39:42.279127    9631 kic.go:121] calculated static IP "192.168.76.2" for the "docker-flags-907000" container
	I1107 16:39:42.279232    9631 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 16:39:42.332394    9631 cli_runner.go:164] Run: docker volume create docker-flags-907000 --label name.minikube.sigs.k8s.io=docker-flags-907000 --label created_by.minikube.sigs.k8s.io=true
	I1107 16:39:42.384342    9631 oci.go:103] Successfully created a docker volume docker-flags-907000
	I1107 16:39:42.384478    9631 cli_runner.go:164] Run: docker run --rm --name docker-flags-907000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-907000 --entrypoint /usr/bin/test -v docker-flags-907000:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 16:39:42.808769    9631 oci.go:107] Successfully prepared a docker volume docker-flags-907000
	I1107 16:39:42.808828    9631 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 16:39:42.808840    9631 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 16:39:42.808949    9631 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-907000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 16:45:41.970152    9631 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 16:45:41.970294    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:45:42.025057    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	I1107 16:45:42.025185    9631 retry.go:31] will retry after 239.810069ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:42.265489    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:45:42.316810    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	I1107 16:45:42.316903    9631 retry.go:31] will retry after 204.843324ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:42.524167    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:45:42.575827    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	I1107 16:45:42.575932    9631 retry.go:31] will retry after 349.460262ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:42.927614    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:45:42.980072    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	I1107 16:45:42.980183    9631 retry.go:31] will retry after 607.895423ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:43.588999    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:45:43.643216    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	W1107 16:45:43.643317    9631 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	
	W1107 16:45:43.643342    9631 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:43.643414    9631 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 16:45:43.643473    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:45:43.693668    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	I1107 16:45:43.693767    9631 retry.go:31] will retry after 200.505798ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:43.896012    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:45:43.949639    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	I1107 16:45:43.949735    9631 retry.go:31] will retry after 521.371009ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:44.471887    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:45:44.524481    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	I1107 16:45:44.524577    9631 retry.go:31] will retry after 393.709591ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:44.918583    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:45:44.974161    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	W1107 16:45:44.974264    9631 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	
	W1107 16:45:44.974282    9631 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:44.974299    9631 start.go:128] duration metric: createHost completed in 6m3.031192778s
	I1107 16:45:44.974308    9631 start.go:83] releasing machines lock for "docker-flags-907000", held for 6m3.031289242s
	W1107 16:45:44.974336    9631 start.go:691] error starting host: creating host: create host timed out in 360.000000 seconds
	I1107 16:45:44.974832    9631 cli_runner.go:164] Run: docker container inspect docker-flags-907000 --format={{.State.Status}}
	W1107 16:45:45.031142    9631 cli_runner.go:211] docker container inspect docker-flags-907000 --format={{.State.Status}} returned with exit code 1
	I1107 16:45:45.031192    9631 delete.go:82] Unable to get host status for docker-flags-907000, assuming it has already been deleted: state: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	W1107 16:45:45.031272    9631 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1107 16:45:45.031283    9631 start.go:706] Will try again in 5 seconds ...
	I1107 16:45:50.031852    9631 start.go:365] acquiring machines lock for docker-flags-907000: {Name:mk22374b81cc5bbb19eb7a3156119787b2788e51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 16:45:50.031987    9631 start.go:369] acquired machines lock for "docker-flags-907000" in 102.155µs
	I1107 16:45:50.032014    9631 start.go:96] Skipping create...Using existing machine configuration
	I1107 16:45:50.032026    9631 fix.go:54] fixHost starting: 
	I1107 16:45:50.032362    9631 cli_runner.go:164] Run: docker container inspect docker-flags-907000 --format={{.State.Status}}
	W1107 16:45:50.084882    9631 cli_runner.go:211] docker container inspect docker-flags-907000 --format={{.State.Status}} returned with exit code 1
	I1107 16:45:50.084947    9631 fix.go:102] recreateIfNeeded on docker-flags-907000: state= err=unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:50.084966    9631 fix.go:107] machineExists: false. err=machine does not exist
	I1107 16:45:50.106868    9631 out.go:177] * docker "docker-flags-907000" container is missing, will recreate.
	I1107 16:45:50.150336    9631 delete.go:124] DEMOLISHING docker-flags-907000 ...
	I1107 16:45:50.150559    9631 cli_runner.go:164] Run: docker container inspect docker-flags-907000 --format={{.State.Status}}
	W1107 16:45:50.201996    9631 cli_runner.go:211] docker container inspect docker-flags-907000 --format={{.State.Status}} returned with exit code 1
	W1107 16:45:50.202051    9631 stop.go:75] unable to get state: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:50.202071    9631 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:50.202451    9631 cli_runner.go:164] Run: docker container inspect docker-flags-907000 --format={{.State.Status}}
	W1107 16:45:50.252012    9631 cli_runner.go:211] docker container inspect docker-flags-907000 --format={{.State.Status}} returned with exit code 1
	I1107 16:45:50.252083    9631 delete.go:82] Unable to get host status for docker-flags-907000, assuming it has already been deleted: state: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:50.252166    9631 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-907000
	W1107 16:45:50.302318    9631 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-907000 returned with exit code 1
	I1107 16:45:50.302353    9631 kic.go:371] could not find the container docker-flags-907000 to remove it. will try anyways
	I1107 16:45:50.302431    9631 cli_runner.go:164] Run: docker container inspect docker-flags-907000 --format={{.State.Status}}
	W1107 16:45:50.353220    9631 cli_runner.go:211] docker container inspect docker-flags-907000 --format={{.State.Status}} returned with exit code 1
	W1107 16:45:50.353274    9631 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:50.353362    9631 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-907000 /bin/bash -c "sudo init 0"
	W1107 16:45:50.403268    9631 cli_runner.go:211] docker exec --privileged -t docker-flags-907000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1107 16:45:50.403298    9631 oci.go:650] error shutdown docker-flags-907000: docker exec --privileged -t docker-flags-907000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:51.405727    9631 cli_runner.go:164] Run: docker container inspect docker-flags-907000 --format={{.State.Status}}
	W1107 16:45:51.458931    9631 cli_runner.go:211] docker container inspect docker-flags-907000 --format={{.State.Status}} returned with exit code 1
	I1107 16:45:51.458977    9631 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:51.458991    9631 oci.go:664] temporary error: container docker-flags-907000 status is  but expect it to be exited
	I1107 16:45:51.459016    9631 retry.go:31] will retry after 522.8617ms: couldn't verify container is exited. %v: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:51.983451    9631 cli_runner.go:164] Run: docker container inspect docker-flags-907000 --format={{.State.Status}}
	W1107 16:45:52.037399    9631 cli_runner.go:211] docker container inspect docker-flags-907000 --format={{.State.Status}} returned with exit code 1
	I1107 16:45:52.037449    9631 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:52.037458    9631 oci.go:664] temporary error: container docker-flags-907000 status is  but expect it to be exited
	I1107 16:45:52.037481    9631 retry.go:31] will retry after 600.083706ms: couldn't verify container is exited. %v: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:52.639836    9631 cli_runner.go:164] Run: docker container inspect docker-flags-907000 --format={{.State.Status}}
	W1107 16:45:52.694130    9631 cli_runner.go:211] docker container inspect docker-flags-907000 --format={{.State.Status}} returned with exit code 1
	I1107 16:45:52.694177    9631 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:52.694187    9631 oci.go:664] temporary error: container docker-flags-907000 status is  but expect it to be exited
	I1107 16:45:52.694212    9631 retry.go:31] will retry after 1.125726517s: couldn't verify container is exited. %v: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:53.822139    9631 cli_runner.go:164] Run: docker container inspect docker-flags-907000 --format={{.State.Status}}
	W1107 16:45:53.875423    9631 cli_runner.go:211] docker container inspect docker-flags-907000 --format={{.State.Status}} returned with exit code 1
	I1107 16:45:53.875476    9631 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:53.875488    9631 oci.go:664] temporary error: container docker-flags-907000 status is  but expect it to be exited
	I1107 16:45:53.875513    9631 retry.go:31] will retry after 1.699975319s: couldn't verify container is exited. %v: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:55.575734    9631 cli_runner.go:164] Run: docker container inspect docker-flags-907000 --format={{.State.Status}}
	W1107 16:45:55.628757    9631 cli_runner.go:211] docker container inspect docker-flags-907000 --format={{.State.Status}} returned with exit code 1
	I1107 16:45:55.628803    9631 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:55.628814    9631 oci.go:664] temporary error: container docker-flags-907000 status is  but expect it to be exited
	I1107 16:45:55.628839    9631 retry.go:31] will retry after 2.157855243s: couldn't verify container is exited. %v: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:57.788187    9631 cli_runner.go:164] Run: docker container inspect docker-flags-907000 --format={{.State.Status}}
	W1107 16:45:57.841764    9631 cli_runner.go:211] docker container inspect docker-flags-907000 --format={{.State.Status}} returned with exit code 1
	I1107 16:45:57.841817    9631 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:45:57.841825    9631 oci.go:664] temporary error: container docker-flags-907000 status is  but expect it to be exited
	I1107 16:45:57.841847    9631 retry.go:31] will retry after 2.703011713s: couldn't verify container is exited. %v: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:46:00.546433    9631 cli_runner.go:164] Run: docker container inspect docker-flags-907000 --format={{.State.Status}}
	W1107 16:46:00.601017    9631 cli_runner.go:211] docker container inspect docker-flags-907000 --format={{.State.Status}} returned with exit code 1
	I1107 16:46:00.601064    9631 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:46:00.601075    9631 oci.go:664] temporary error: container docker-flags-907000 status is  but expect it to be exited
	I1107 16:46:00.601099    9631 retry.go:31] will retry after 3.133715354s: couldn't verify container is exited. %v: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:46:03.736243    9631 cli_runner.go:164] Run: docker container inspect docker-flags-907000 --format={{.State.Status}}
	W1107 16:46:03.790647    9631 cli_runner.go:211] docker container inspect docker-flags-907000 --format={{.State.Status}} returned with exit code 1
	I1107 16:46:03.790693    9631 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:46:03.790708    9631 oci.go:664] temporary error: container docker-flags-907000 status is  but expect it to be exited
	I1107 16:46:03.790737    9631 oci.go:88] couldn't shut down docker-flags-907000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	 
	I1107 16:46:03.790822    9631 cli_runner.go:164] Run: docker rm -f -v docker-flags-907000
	I1107 16:46:03.843851    9631 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-907000
	W1107 16:46:03.893377    9631 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-907000 returned with exit code 1
	I1107 16:46:03.893485    9631 cli_runner.go:164] Run: docker network inspect docker-flags-907000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 16:46:03.944062    9631 cli_runner.go:164] Run: docker network rm docker-flags-907000
	I1107 16:46:04.050220    9631 fix.go:114] Sleeping 1 second for extra luck!
	I1107 16:46:05.051925    9631 start.go:125] createHost starting for "" (driver="docker")
	I1107 16:46:05.075175    9631 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1107 16:46:05.075360    9631 start.go:159] libmachine.API.Create for "docker-flags-907000" (driver="docker")
	I1107 16:46:05.075395    9631 client.go:168] LocalClient.Create starting
	I1107 16:46:05.075662    9631 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca.pem
	I1107 16:46:05.075756    9631 main.go:141] libmachine: Decoding PEM data...
	I1107 16:46:05.075780    9631 main.go:141] libmachine: Parsing certificate...
	I1107 16:46:05.075865    9631 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/cert.pem
	I1107 16:46:05.075937    9631 main.go:141] libmachine: Decoding PEM data...
	I1107 16:46:05.075954    9631 main.go:141] libmachine: Parsing certificate...
	I1107 16:46:05.097627    9631 cli_runner.go:164] Run: docker network inspect docker-flags-907000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 16:46:05.150910    9631 cli_runner.go:211] docker network inspect docker-flags-907000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 16:46:05.151018    9631 network_create.go:281] running [docker network inspect docker-flags-907000] to gather additional debugging logs...
	I1107 16:46:05.151047    9631 cli_runner.go:164] Run: docker network inspect docker-flags-907000
	W1107 16:46:05.201277    9631 cli_runner.go:211] docker network inspect docker-flags-907000 returned with exit code 1
	I1107 16:46:05.201308    9631 network_create.go:284] error running [docker network inspect docker-flags-907000]: docker network inspect docker-flags-907000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-907000 not found
	I1107 16:46:05.201325    9631 network_create.go:286] output of [docker network inspect docker-flags-907000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-907000 not found
	
	** /stderr **
	I1107 16:46:05.201471    9631 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 16:46:05.253874    9631 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:46:05.255430    9631 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:46:05.256987    9631 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:46:05.258474    9631 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:46:05.260049    9631 network.go:212] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:46:05.260403    9631 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002345940}
	I1107 16:46:05.260426    9631 network_create.go:124] attempt to create docker network docker-flags-907000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I1107 16:46:05.260502    9631 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-907000 docker-flags-907000
	I1107 16:46:05.346153    9631 network_create.go:108] docker network docker-flags-907000 192.168.94.0/24 created
	I1107 16:46:05.346352    9631 kic.go:121] calculated static IP "192.168.94.2" for the "docker-flags-907000" container
	I1107 16:46:05.346468    9631 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 16:46:05.399842    9631 cli_runner.go:164] Run: docker volume create docker-flags-907000 --label name.minikube.sigs.k8s.io=docker-flags-907000 --label created_by.minikube.sigs.k8s.io=true
	I1107 16:46:05.449693    9631 oci.go:103] Successfully created a docker volume docker-flags-907000
	I1107 16:46:05.449807    9631 cli_runner.go:164] Run: docker run --rm --name docker-flags-907000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-907000 --entrypoint /usr/bin/test -v docker-flags-907000:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 16:46:05.741789    9631 oci.go:107] Successfully prepared a docker volume docker-flags-907000
	I1107 16:46:05.741826    9631 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 16:46:05.741838    9631 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 16:46:05.741937    9631 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-907000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 16:52:05.061860    9631 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 16:52:05.061980    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:52:05.113539    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	I1107 16:52:05.113647    9631 retry.go:31] will retry after 353.259837ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:52:05.469325    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:52:05.525208    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	I1107 16:52:05.525319    9631 retry.go:31] will retry after 395.925231ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:52:05.922571    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:52:05.975071    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	I1107 16:52:05.975170    9631 retry.go:31] will retry after 818.436457ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:52:06.795921    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:52:06.849302    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	W1107 16:52:06.849404    9631 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	
	W1107 16:52:06.849450    9631 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:52:06.849510    9631 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 16:52:06.849577    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:52:06.899279    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	I1107 16:52:06.899375    9631 retry.go:31] will retry after 374.65471ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:52:07.274790    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:52:07.328730    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	I1107 16:52:07.328831    9631 retry.go:31] will retry after 547.069724ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:52:07.878127    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:52:07.932558    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	I1107 16:52:07.932651    9631 retry.go:31] will retry after 664.013239ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:52:08.599081    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:52:08.651327    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	W1107 16:52:08.651445    9631 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	
	W1107 16:52:08.651463    9631 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:52:08.651470    9631 start.go:128] duration metric: createHost completed in 6m3.613860128s
	I1107 16:52:08.651531    9631 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 16:52:08.651596    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:52:08.701945    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	I1107 16:52:08.702038    9631 retry.go:31] will retry after 315.796938ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:52:09.019499    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:52:09.074460    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	I1107 16:52:09.074553    9631 retry.go:31] will retry after 266.555309ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:52:09.343470    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:52:09.398206    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	I1107 16:52:09.398294    9631 retry.go:31] will retry after 514.751507ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:52:09.913987    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:52:09.967430    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	I1107 16:52:09.967525    9631 retry.go:31] will retry after 446.420126ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:52:10.415676    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:52:10.468894    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	W1107 16:52:10.469000    9631 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	
	W1107 16:52:10.469017    9631 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:52:10.469080    9631 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 16:52:10.469138    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:52:10.520274    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	I1107 16:52:10.520373    9631 retry.go:31] will retry after 284.207972ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:52:10.805152    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:52:10.858985    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	I1107 16:52:10.859076    9631 retry.go:31] will retry after 536.941476ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:52:11.397639    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:52:11.449890    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	I1107 16:52:11.449980    9631 retry.go:31] will retry after 391.137065ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:52:11.843519    9631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000
	W1107 16:52:11.897584    9631 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000 returned with exit code 1
	W1107 16:52:11.897684    9631 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	
	W1107 16:52:11.897707    9631 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-907000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-907000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	I1107 16:52:11.897719    9631 fix.go:56] fixHost completed within 6m21.880760576s
	I1107 16:52:11.897728    9631 start.go:83] releasing machines lock for "docker-flags-907000", held for 6m21.880797546s
	W1107 16:52:11.897810    9631 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-907000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p docker-flags-907000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1107 16:52:11.941212    9631 out.go:177] 
	W1107 16:52:11.962193    9631 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1107 16:52:11.962247    9631 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1107 16:52:11.962273    9631 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1107 16:52:11.984119    9631 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-907000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-907000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-907000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (201.507354ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_ssh_45ab9b4ee43b1ccee1cc1cad42a504b375b49bd8_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-907000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-907000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-907000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (202.922123ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_ssh_0c4d48d3465e4cc08ca5bd2bd06b407509a1612b_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-907000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-907000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:523: *** TestDockerFlags FAILED at 2023-11-07 16:52:12.464138 -0800 PST m=+6711.483164424
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-907000
helpers_test.go:235: (dbg) docker inspect docker-flags-907000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "docker-flags-907000",
	        "Id": "ef0f08ce11dab4839a6d0099d202369fb17ca48ca8f2c12afcd4b91e9f19c2e1",
	        "Created": "2023-11-08T00:46:05.307190716Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "docker-flags-907000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-907000 -n docker-flags-907000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-907000 -n docker-flags-907000: exit status 7 (106.593697ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 16:52:12.623193   10171 status.go:249] status error: host: state: unknown state "docker-flags-907000": docker container inspect docker-flags-907000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-907000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-907000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-907000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-907000
--- FAIL: TestDockerFlags (752.24s)

                                                
                                    
TestForceSystemdFlag (754.25s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-919000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-919000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 52 (12m33.135017451s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-919000] minikube v1.32.0 on Darwin 14.1
	  - MINIKUBE_LOCATION=17585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node force-systemd-flag-919000 in cluster force-systemd-flag-919000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-919000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 16:39:17.539196    9517 out.go:296] Setting OutFile to fd 1 ...
	I1107 16:39:17.539404    9517 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:39:17.539411    9517 out.go:309] Setting ErrFile to fd 2...
	I1107 16:39:17.539415    9517 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:39:17.539591    9517 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
	I1107 16:39:17.541038    9517 out.go:303] Setting JSON to false
	I1107 16:39:17.563737    9517 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":7731,"bootTime":1699396226,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1107 16:39:17.563844    9517 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1107 16:39:17.585160    9517 out.go:177] * [force-systemd-flag-919000] minikube v1.32.0 on Darwin 14.1
	I1107 16:39:17.607208    9517 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 16:39:17.607269    9517 notify.go:220] Checking for updates...
	I1107 16:39:17.650788    9517 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	I1107 16:39:17.672058    9517 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 16:39:17.692931    9517 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 16:39:17.713960    9517 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	I1107 16:39:17.735057    9517 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 16:39:17.756263    9517 config.go:182] Loaded profile config "force-systemd-env-582000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 16:39:17.756351    9517 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 16:39:17.813124    9517 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.0 (126437)
	I1107 16:39:17.813257    9517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:39:17.912794    9517 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:90 OomKillDisable:false NGoroutines:188 SystemTime:2023-11-08 00:39:17.903116525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218715136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-e2cce99df426 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1107 16:39:17.955174    9517 out.go:177] * Using the docker driver based on user configuration
	I1107 16:39:17.975983    9517 start.go:298] selected driver: docker
	I1107 16:39:17.976018    9517 start.go:902] validating driver "docker" against <nil>
	I1107 16:39:17.976032    9517 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 16:39:17.980363    9517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:39:18.081103    9517 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:90 OomKillDisable:false NGoroutines:188 SystemTime:2023-11-08 00:39:18.07224725 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218715136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-e2cce99df426 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1107 16:39:18.081251    9517 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 16:39:18.081466    9517 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1107 16:39:18.102579    9517 out.go:177] * Using Docker Desktop driver with root privileges
	I1107 16:39:18.123795    9517 cni.go:84] Creating CNI manager for ""
	I1107 16:39:18.123841    9517 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1107 16:39:18.123861    9517 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1107 16:39:18.123886    9517 start_flags.go:323] config:
	{Name:force-systemd-flag-919000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:force-systemd-flag-919000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 16:39:18.145626    9517 out.go:177] * Starting control plane node force-systemd-flag-919000 in cluster force-systemd-flag-919000
	I1107 16:39:18.167497    9517 cache.go:121] Beginning downloading kic base image for docker with docker
	I1107 16:39:18.188724    9517 out.go:177] * Pulling base image ...
	I1107 16:39:18.230543    9517 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 16:39:18.230617    9517 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1107 16:39:18.230639    9517 cache.go:56] Caching tarball of preloaded images
	I1107 16:39:18.230650    9517 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 16:39:18.230852    9517 preload.go:174] Found /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 16:39:18.230873    9517 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1107 16:39:18.231027    9517 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/force-systemd-flag-919000/config.json ...
	I1107 16:39:18.231103    9517 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/force-systemd-flag-919000/config.json: {Name:mk841ba350a71bb85b122623db4c8619dd67a686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 16:39:18.282394    9517 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1107 16:39:18.282421    9517 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1107 16:39:18.282438    9517 cache.go:194] Successfully downloaded all kic artifacts
	I1107 16:39:18.282480    9517 start.go:365] acquiring machines lock for force-systemd-flag-919000: {Name:mka83e46cd296eb1b3130c9ebda2b39bc76b0875 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 16:39:18.282628    9517 start.go:369] acquired machines lock for "force-systemd-flag-919000" in 133.383µs
	I1107 16:39:18.282662    9517 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-919000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:force-systemd-flag-919000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 16:39:18.282729    9517 start.go:125] createHost starting for "" (driver="docker")
	I1107 16:39:18.324544    9517 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1107 16:39:18.324920    9517 start.go:159] libmachine.API.Create for "force-systemd-flag-919000" (driver="docker")
	I1107 16:39:18.324970    9517 client.go:168] LocalClient.Create starting
	I1107 16:39:18.325152    9517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca.pem
	I1107 16:39:18.325242    9517 main.go:141] libmachine: Decoding PEM data...
	I1107 16:39:18.325278    9517 main.go:141] libmachine: Parsing certificate...
	I1107 16:39:18.325384    9517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/cert.pem
	I1107 16:39:18.325453    9517 main.go:141] libmachine: Decoding PEM data...
	I1107 16:39:18.325470    9517 main.go:141] libmachine: Parsing certificate...
	I1107 16:39:18.326193    9517 cli_runner.go:164] Run: docker network inspect force-systemd-flag-919000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 16:39:18.376787    9517 cli_runner.go:211] docker network inspect force-systemd-flag-919000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 16:39:18.376892    9517 network_create.go:281] running [docker network inspect force-systemd-flag-919000] to gather additional debugging logs...
	I1107 16:39:18.376915    9517 cli_runner.go:164] Run: docker network inspect force-systemd-flag-919000
	W1107 16:39:18.427187    9517 cli_runner.go:211] docker network inspect force-systemd-flag-919000 returned with exit code 1
	I1107 16:39:18.427217    9517 network_create.go:284] error running [docker network inspect force-systemd-flag-919000]: docker network inspect force-systemd-flag-919000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-919000 not found
	I1107 16:39:18.427231    9517 network_create.go:286] output of [docker network inspect force-systemd-flag-919000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-919000 not found
	
	** /stderr **
	I1107 16:39:18.427406    9517 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 16:39:18.479505    9517 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:39:18.479897    9517 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000997d70}
	I1107 16:39:18.479914    9517 network_create.go:124] attempt to create docker network force-systemd-flag-919000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1107 16:39:18.479993    9517 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-919000 force-systemd-flag-919000
	I1107 16:39:18.566267    9517 network_create.go:108] docker network force-systemd-flag-919000 192.168.58.0/24 created
	I1107 16:39:18.566319    9517 kic.go:121] calculated static IP "192.168.58.2" for the "force-systemd-flag-919000" container
	I1107 16:39:18.566428    9517 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 16:39:18.619207    9517 cli_runner.go:164] Run: docker volume create force-systemd-flag-919000 --label name.minikube.sigs.k8s.io=force-systemd-flag-919000 --label created_by.minikube.sigs.k8s.io=true
	I1107 16:39:18.670799    9517 oci.go:103] Successfully created a docker volume force-systemd-flag-919000
	I1107 16:39:18.670923    9517 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-919000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-919000 --entrypoint /usr/bin/test -v force-systemd-flag-919000:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 16:39:19.059187    9517 oci.go:107] Successfully prepared a docker volume force-systemd-flag-919000
	I1107 16:39:19.059228    9517 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 16:39:19.059243    9517 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 16:39:19.059344    9517 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-919000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 16:45:18.311372    9517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 16:45:18.311511    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:45:18.366946    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	I1107 16:45:18.367087    9517 retry.go:31] will retry after 126.688406ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:18.495109    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:45:18.547041    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	I1107 16:45:18.547156    9517 retry.go:31] will retry after 373.11789ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:18.920707    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:45:18.975384    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	I1107 16:45:18.975475    9517 retry.go:31] will retry after 749.484291ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:19.726601    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:45:19.779952    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	W1107 16:45:19.780064    9517 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	
	W1107 16:45:19.780084    9517 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:19.780147    9517 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 16:45:19.780206    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:45:19.831536    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	I1107 16:45:19.831642    9517 retry.go:31] will retry after 365.238154ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:20.197457    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:45:20.250966    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	I1107 16:45:20.251053    9517 retry.go:31] will retry after 219.1533ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:20.472582    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:45:20.527632    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	I1107 16:45:20.527717    9517 retry.go:31] will retry after 518.055137ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:21.046700    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:45:21.100627    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	I1107 16:45:21.100723    9517 retry.go:31] will retry after 655.902386ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:21.757087    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:45:21.811861    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	W1107 16:45:21.811963    9517 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	
	W1107 16:45:21.811985    9517 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:21.812002    9517 start.go:128] duration metric: createHost completed in 6m3.543605089s
	I1107 16:45:21.812012    9517 start.go:83] releasing machines lock for "force-systemd-flag-919000", held for 6m3.543719039s
	W1107 16:45:21.812025    9517 start.go:691] error starting host: creating host: create host timed out in 360.000000 seconds
	I1107 16:45:21.812472    9517 cli_runner.go:164] Run: docker container inspect force-systemd-flag-919000 --format={{.State.Status}}
	W1107 16:45:21.863098    9517 cli_runner.go:211] docker container inspect force-systemd-flag-919000 --format={{.State.Status}} returned with exit code 1
	I1107 16:45:21.863142    9517 delete.go:82] Unable to get host status for force-systemd-flag-919000, assuming it has already been deleted: state: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	W1107 16:45:21.863221    9517 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1107 16:45:21.863232    9517 start.go:706] Will try again in 5 seconds ...
	I1107 16:45:26.865262    9517 start.go:365] acquiring machines lock for force-systemd-flag-919000: {Name:mka83e46cd296eb1b3130c9ebda2b39bc76b0875 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 16:45:26.866185    9517 start.go:369] acquired machines lock for "force-systemd-flag-919000" in 854.401µs
	I1107 16:45:26.866306    9517 start.go:96] Skipping create...Using existing machine configuration
	I1107 16:45:26.866325    9517 fix.go:54] fixHost starting: 
	I1107 16:45:26.866843    9517 cli_runner.go:164] Run: docker container inspect force-systemd-flag-919000 --format={{.State.Status}}
	W1107 16:45:26.922793    9517 cli_runner.go:211] docker container inspect force-systemd-flag-919000 --format={{.State.Status}} returned with exit code 1
	I1107 16:45:26.922836    9517 fix.go:102] recreateIfNeeded on force-systemd-flag-919000: state= err=unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:26.922857    9517 fix.go:107] machineExists: false. err=machine does not exist
	I1107 16:45:26.943962    9517 out.go:177] * docker "force-systemd-flag-919000" container is missing, will recreate.
	I1107 16:45:26.987549    9517 delete.go:124] DEMOLISHING force-systemd-flag-919000 ...
	I1107 16:45:26.987732    9517 cli_runner.go:164] Run: docker container inspect force-systemd-flag-919000 --format={{.State.Status}}
	W1107 16:45:27.038730    9517 cli_runner.go:211] docker container inspect force-systemd-flag-919000 --format={{.State.Status}} returned with exit code 1
	W1107 16:45:27.038788    9517 stop.go:75] unable to get state: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:27.038807    9517 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:27.039186    9517 cli_runner.go:164] Run: docker container inspect force-systemd-flag-919000 --format={{.State.Status}}
	W1107 16:45:27.088904    9517 cli_runner.go:211] docker container inspect force-systemd-flag-919000 --format={{.State.Status}} returned with exit code 1
	I1107 16:45:27.088959    9517 delete.go:82] Unable to get host status for force-systemd-flag-919000, assuming it has already been deleted: state: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:27.089045    9517 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-919000
	W1107 16:45:27.138950    9517 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-919000 returned with exit code 1
	I1107 16:45:27.139009    9517 kic.go:371] could not find the container force-systemd-flag-919000 to remove it. will try anyways
	I1107 16:45:27.139105    9517 cli_runner.go:164] Run: docker container inspect force-systemd-flag-919000 --format={{.State.Status}}
	W1107 16:45:27.188923    9517 cli_runner.go:211] docker container inspect force-systemd-flag-919000 --format={{.State.Status}} returned with exit code 1
	W1107 16:45:27.188976    9517 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:27.189061    9517 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-919000 /bin/bash -c "sudo init 0"
	W1107 16:45:27.239192    9517 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-919000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1107 16:45:27.239229    9517 oci.go:650] error shutdown force-systemd-flag-919000: docker exec --privileged -t force-systemd-flag-919000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:28.240785    9517 cli_runner.go:164] Run: docker container inspect force-systemd-flag-919000 --format={{.State.Status}}
	W1107 16:45:28.294679    9517 cli_runner.go:211] docker container inspect force-systemd-flag-919000 --format={{.State.Status}} returned with exit code 1
	I1107 16:45:28.294736    9517 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:28.294749    9517 oci.go:664] temporary error: container force-systemd-flag-919000 status is  but expect it to be exited
	I1107 16:45:28.294772    9517 retry.go:31] will retry after 392.51803ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:28.689591    9517 cli_runner.go:164] Run: docker container inspect force-systemd-flag-919000 --format={{.State.Status}}
	W1107 16:45:28.744672    9517 cli_runner.go:211] docker container inspect force-systemd-flag-919000 --format={{.State.Status}} returned with exit code 1
	I1107 16:45:28.744720    9517 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:28.744730    9517 oci.go:664] temporary error: container force-systemd-flag-919000 status is  but expect it to be exited
	I1107 16:45:28.744768    9517 retry.go:31] will retry after 885.305921ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:29.630530    9517 cli_runner.go:164] Run: docker container inspect force-systemd-flag-919000 --format={{.State.Status}}
	W1107 16:45:29.683403    9517 cli_runner.go:211] docker container inspect force-systemd-flag-919000 --format={{.State.Status}} returned with exit code 1
	I1107 16:45:29.683456    9517 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:29.683477    9517 oci.go:664] temporary error: container force-systemd-flag-919000 status is  but expect it to be exited
	I1107 16:45:29.683509    9517 retry.go:31] will retry after 691.151543ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:30.377015    9517 cli_runner.go:164] Run: docker container inspect force-systemd-flag-919000 --format={{.State.Status}}
	W1107 16:45:30.430220    9517 cli_runner.go:211] docker container inspect force-systemd-flag-919000 --format={{.State.Status}} returned with exit code 1
	I1107 16:45:30.430276    9517 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:30.430291    9517 oci.go:664] temporary error: container force-systemd-flag-919000 status is  but expect it to be exited
	I1107 16:45:30.430314    9517 retry.go:31] will retry after 2.114125671s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:32.546698    9517 cli_runner.go:164] Run: docker container inspect force-systemd-flag-919000 --format={{.State.Status}}
	W1107 16:45:32.600192    9517 cli_runner.go:211] docker container inspect force-systemd-flag-919000 --format={{.State.Status}} returned with exit code 1
	I1107 16:45:32.600240    9517 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:32.600253    9517 oci.go:664] temporary error: container force-systemd-flag-919000 status is  but expect it to be exited
	I1107 16:45:32.600276    9517 retry.go:31] will retry after 2.957554675s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:35.560097    9517 cli_runner.go:164] Run: docker container inspect force-systemd-flag-919000 --format={{.State.Status}}
	W1107 16:45:35.611321    9517 cli_runner.go:211] docker container inspect force-systemd-flag-919000 --format={{.State.Status}} returned with exit code 1
	I1107 16:45:35.611371    9517 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:35.611380    9517 oci.go:664] temporary error: container force-systemd-flag-919000 status is  but expect it to be exited
	I1107 16:45:35.611402    9517 retry.go:31] will retry after 2.340229686s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:37.953754    9517 cli_runner.go:164] Run: docker container inspect force-systemd-flag-919000 --format={{.State.Status}}
	W1107 16:45:38.009278    9517 cli_runner.go:211] docker container inspect force-systemd-flag-919000 --format={{.State.Status}} returned with exit code 1
	I1107 16:45:38.009322    9517 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:38.009336    9517 oci.go:664] temporary error: container force-systemd-flag-919000 status is  but expect it to be exited
	I1107 16:45:38.009364    9517 retry.go:31] will retry after 4.891819505s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:42.902598    9517 cli_runner.go:164] Run: docker container inspect force-systemd-flag-919000 --format={{.State.Status}}
	W1107 16:45:42.955848    9517 cli_runner.go:211] docker container inspect force-systemd-flag-919000 --format={{.State.Status}} returned with exit code 1
	I1107 16:45:42.955906    9517 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:45:42.955922    9517 oci.go:664] temporary error: container force-systemd-flag-919000 status is  but expect it to be exited
	I1107 16:45:42.955951    9517 oci.go:88] couldn't shut down force-systemd-flag-919000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	 
	I1107 16:45:42.956046    9517 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-919000
	I1107 16:45:43.008387    9517 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-919000
	W1107 16:45:43.058663    9517 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-919000 returned with exit code 1
	I1107 16:45:43.058783    9517 cli_runner.go:164] Run: docker network inspect force-systemd-flag-919000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 16:45:43.109218    9517 cli_runner.go:164] Run: docker network rm force-systemd-flag-919000
	I1107 16:45:43.209812    9517 fix.go:114] Sleeping 1 second for extra luck!
	I1107 16:45:44.211964    9517 start.go:125] createHost starting for "" (driver="docker")
	I1107 16:45:44.237458    9517 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1107 16:45:44.237669    9517 start.go:159] libmachine.API.Create for "force-systemd-flag-919000" (driver="docker")
	I1107 16:45:44.237707    9517 client.go:168] LocalClient.Create starting
	I1107 16:45:44.237936    9517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca.pem
	I1107 16:45:44.238046    9517 main.go:141] libmachine: Decoding PEM data...
	I1107 16:45:44.238088    9517 main.go:141] libmachine: Parsing certificate...
	I1107 16:45:44.238183    9517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/cert.pem
	I1107 16:45:44.238270    9517 main.go:141] libmachine: Decoding PEM data...
	I1107 16:45:44.238288    9517 main.go:141] libmachine: Parsing certificate...
	I1107 16:45:44.259311    9517 cli_runner.go:164] Run: docker network inspect force-systemd-flag-919000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 16:45:44.329701    9517 cli_runner.go:211] docker network inspect force-systemd-flag-919000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 16:45:44.329844    9517 network_create.go:281] running [docker network inspect force-systemd-flag-919000] to gather additional debugging logs...
	I1107 16:45:44.329887    9517 cli_runner.go:164] Run: docker network inspect force-systemd-flag-919000
	W1107 16:45:44.386733    9517 cli_runner.go:211] docker network inspect force-systemd-flag-919000 returned with exit code 1
	I1107 16:45:44.386762    9517 network_create.go:284] error running [docker network inspect force-systemd-flag-919000]: docker network inspect force-systemd-flag-919000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-919000 not found
	I1107 16:45:44.386777    9517 network_create.go:286] output of [docker network inspect force-systemd-flag-919000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-919000 not found
	
	** /stderr **
	I1107 16:45:44.386913    9517 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 16:45:44.439301    9517 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:45:44.440773    9517 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:45:44.442272    9517 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:45:44.443942    9517 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:45:44.444302    9517 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002371cd0}
	I1107 16:45:44.444315    9517 network_create.go:124] attempt to create docker network force-systemd-flag-919000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I1107 16:45:44.444382    9517 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-919000 force-systemd-flag-919000
	I1107 16:45:44.533072    9517 network_create.go:108] docker network force-systemd-flag-919000 192.168.85.0/24 created
	I1107 16:45:44.533114    9517 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-919000" container
	I1107 16:45:44.533241    9517 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 16:45:44.586656    9517 cli_runner.go:164] Run: docker volume create force-systemd-flag-919000 --label name.minikube.sigs.k8s.io=force-systemd-flag-919000 --label created_by.minikube.sigs.k8s.io=true
	I1107 16:45:44.636925    9517 oci.go:103] Successfully created a docker volume force-systemd-flag-919000
	I1107 16:45:44.637053    9517 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-919000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-919000 --entrypoint /usr/bin/test -v force-systemd-flag-919000:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 16:45:44.933675    9517 oci.go:107] Successfully prepared a docker volume force-systemd-flag-919000
	I1107 16:45:44.933711    9517 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 16:45:44.933722    9517 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 16:45:44.933833    9517 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-919000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 16:51:44.223850    9517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 16:51:44.223942    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:51:44.277503    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	I1107 16:51:44.277627    9517 retry.go:31] will retry after 142.323411ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:51:44.422377    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:51:44.477365    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	I1107 16:51:44.477461    9517 retry.go:31] will retry after 252.840909ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:51:44.730741    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:51:44.782308    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	I1107 16:51:44.782430    9517 retry.go:31] will retry after 612.239433ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:51:45.395000    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:51:45.447586    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	W1107 16:51:45.447694    9517 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	
	W1107 16:51:45.447711    9517 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:51:45.447771    9517 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 16:51:45.447830    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:51:45.498141    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	I1107 16:51:45.498241    9517 retry.go:31] will retry after 368.735619ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:51:45.868089    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:51:45.921788    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	I1107 16:51:45.921889    9517 retry.go:31] will retry after 291.343571ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:51:46.214129    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:51:46.264986    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	I1107 16:51:46.265091    9517 retry.go:31] will retry after 410.66074ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:51:46.677901    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:51:46.732424    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	W1107 16:51:46.732525    9517 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	
	W1107 16:51:46.732539    9517 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:51:46.732551    9517 start.go:128] duration metric: createHost completed in 6m2.534831591s
	I1107 16:51:46.732625    9517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 16:51:46.732676    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:51:46.782544    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	I1107 16:51:46.782638    9517 retry.go:31] will retry after 197.862111ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:51:46.980885    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:51:47.035593    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	I1107 16:51:47.035681    9517 retry.go:31] will retry after 474.790253ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:51:47.511074    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:51:47.585588    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	I1107 16:51:47.585682    9517 retry.go:31] will retry after 293.202507ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:51:47.879620    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:51:47.934412    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	I1107 16:51:47.934507    9517 retry.go:31] will retry after 643.482843ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:51:48.579644    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:51:48.632097    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	W1107 16:51:48.632216    9517 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	
	W1107 16:51:48.632241    9517 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:51:48.632299    9517 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 16:51:48.632379    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:51:48.682413    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	I1107 16:51:48.682513    9517 retry.go:31] will retry after 349.832745ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:51:49.032705    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:51:49.087593    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	I1107 16:51:49.087689    9517 retry.go:31] will retry after 517.865372ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:51:49.607970    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:51:49.660808    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	I1107 16:51:49.660901    9517 retry.go:31] will retry after 722.171237ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:51:50.385472    9517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000
	W1107 16:51:50.440471    9517 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000 returned with exit code 1
	W1107 16:51:50.440575    9517 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	
	W1107 16:51:50.440595    9517 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-919000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-919000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	I1107 16:51:50.440609    9517 fix.go:56] fixHost completed within 6m23.589420924s
	I1107 16:51:50.440618    9517 start.go:83] releasing machines lock for "force-systemd-flag-919000", held for 6m23.589488052s
	W1107 16:51:50.440703    9517 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-919000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-919000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1107 16:51:50.484145    9517 out.go:177] 
	W1107 16:51:50.505353    9517 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1107 16:51:50.505420    9517 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1107 16:51:50.505484    9517 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1107 16:51:50.527287    9517 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-919000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-919000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-919000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (198.900781ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_ssh_bee0f26250c13d3e98e295459d643952c0091a53_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-919000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-11-07 16:51:50.801911 -0800 PST m=+6689.820083433
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-919000
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-919000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-919000",
	        "Id": "597f51b923d02da3d0bd78a794fa204ca5328f495089a8c27c88f004a1d74849",
	        "Created": "2023-11-08T00:45:44.492276789Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-flag-919000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-919000 -n force-systemd-flag-919000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-919000 -n force-systemd-flag-919000: exit status 7 (106.736288ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1107 16:51:50.961376   10052 status.go:249] status error: host: state: unknown state "force-systemd-flag-919000": docker container inspect force-systemd-flag-919000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-919000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-919000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-919000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-919000
--- FAIL: TestForceSystemdFlag (754.25s)

TestForceSystemdEnv (751.16s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-582000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E1107 16:28:43.856386    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 16:29:06.654278    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 16:31:46.856738    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 16:33:43.789496    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 16:34:06.642544    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 16:37:09.694790    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 16:38:43.777760    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 16:39:06.629088    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-582000 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 52 (12m30.064240652s)

-- stdout --
	* [force-systemd-env-582000] minikube v1.32.0 on Darwin 14.1
	  - MINIKUBE_LOCATION=17585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node force-systemd-env-582000 in cluster force-systemd-env-582000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-582000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1107 16:27:10.107921    9085 out.go:296] Setting OutFile to fd 1 ...
	I1107 16:27:10.108111    9085 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:27:10.108116    9085 out.go:309] Setting ErrFile to fd 2...
	I1107 16:27:10.108120    9085 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:27:10.108303    9085 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
	I1107 16:27:10.109708    9085 out.go:303] Setting JSON to false
	I1107 16:27:10.132112    9085 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":7004,"bootTime":1699396226,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1107 16:27:10.132205    9085 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1107 16:27:10.153579    9085 out.go:177] * [force-systemd-env-582000] minikube v1.32.0 on Darwin 14.1
	I1107 16:27:10.195191    9085 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 16:27:10.195310    9085 notify.go:220] Checking for updates...
	I1107 16:27:10.217356    9085 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	I1107 16:27:10.238002    9085 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 16:27:10.259103    9085 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 16:27:10.280308    9085 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	I1107 16:27:10.301054    9085 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1107 16:27:10.323030    9085 config.go:182] Loaded profile config "offline-docker-081000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 16:27:10.323177    9085 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 16:27:10.379691    9085 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.0 (126437)
	I1107 16:27:10.379832    9085 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:27:10.477249    9085 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:false NGoroutines:158 SystemTime:2023-11-08 00:27:10.468228473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218715136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-e2cce99df426 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile
=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescripti
on:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:
Docker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1107 16:27:10.519172    9085 out.go:177] * Using the docker driver based on user configuration
	I1107 16:27:10.540222    9085 start.go:298] selected driver: docker
	I1107 16:27:10.540242    9085 start.go:902] validating driver "docker" against <nil>
	I1107 16:27:10.540252    9085 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 16:27:10.543595    9085 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:27:10.641022    9085 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:false NGoroutines:158 SystemTime:2023-11-08 00:27:10.631506181 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218715136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-e2cce99df426 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile
=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescripti
on:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:
Docker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1107 16:27:10.641192    9085 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 16:27:10.641369    9085 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1107 16:27:10.662609    9085 out.go:177] * Using Docker Desktop driver with root privileges
	I1107 16:27:10.683663    9085 cni.go:84] Creating CNI manager for ""
	I1107 16:27:10.683704    9085 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1107 16:27:10.683718    9085 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1107 16:27:10.683736    9085 start_flags.go:323] config:
	{Name:force-systemd-env-582000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:force-systemd-env-582000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 16:27:10.705658    9085 out.go:177] * Starting control plane node force-systemd-env-582000 in cluster force-systemd-env-582000
	I1107 16:27:10.747599    9085 cache.go:121] Beginning downloading kic base image for docker with docker
	I1107 16:27:10.768787    9085 out.go:177] * Pulling base image ...
	I1107 16:27:10.811783    9085 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 16:27:10.811853    9085 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1107 16:27:10.811892    9085 cache.go:56] Caching tarball of preloaded images
	I1107 16:27:10.811892    9085 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 16:27:10.812103    9085 preload.go:174] Found /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 16:27:10.812123    9085 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1107 16:27:10.812330    9085 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/force-systemd-env-582000/config.json ...
	I1107 16:27:10.813043    9085 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/force-systemd-env-582000/config.json: {Name:mk993f7b8551715062bcf405ffb84424dfcf6577 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 16:27:10.864095    9085 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1107 16:27:10.864113    9085 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1107 16:27:10.864150    9085 cache.go:194] Successfully downloaded all kic artifacts
	I1107 16:27:10.864208    9085 start.go:365] acquiring machines lock for force-systemd-env-582000: {Name:mk26e031a03417a9383e80100b18dd2dcff146c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 16:27:10.864377    9085 start.go:369] acquired machines lock for "force-systemd-env-582000" in 154.1µs
	I1107 16:27:10.864401    9085 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-582000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:force-systemd-env-582000 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 16:27:10.864451    9085 start.go:125] createHost starting for "" (driver="docker")
	I1107 16:27:10.906522    9085 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1107 16:27:10.906966    9085 start.go:159] libmachine.API.Create for "force-systemd-env-582000" (driver="docker")
	I1107 16:27:10.907034    9085 client.go:168] LocalClient.Create starting
	I1107 16:27:10.907211    9085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca.pem
	I1107 16:27:10.907299    9085 main.go:141] libmachine: Decoding PEM data...
	I1107 16:27:10.907341    9085 main.go:141] libmachine: Parsing certificate...
	I1107 16:27:10.907455    9085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/cert.pem
	I1107 16:27:10.907525    9085 main.go:141] libmachine: Decoding PEM data...
	I1107 16:27:10.907541    9085 main.go:141] libmachine: Parsing certificate...
	I1107 16:27:10.908585    9085 cli_runner.go:164] Run: docker network inspect force-systemd-env-582000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 16:27:10.959802    9085 cli_runner.go:211] docker network inspect force-systemd-env-582000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 16:27:10.959905    9085 network_create.go:281] running [docker network inspect force-systemd-env-582000] to gather additional debugging logs...
	I1107 16:27:10.959920    9085 cli_runner.go:164] Run: docker network inspect force-systemd-env-582000
	W1107 16:27:11.009878    9085 cli_runner.go:211] docker network inspect force-systemd-env-582000 returned with exit code 1
	I1107 16:27:11.009901    9085 network_create.go:284] error running [docker network inspect force-systemd-env-582000]: docker network inspect force-systemd-env-582000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-582000 not found
	I1107 16:27:11.009913    9085 network_create.go:286] output of [docker network inspect force-systemd-env-582000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-582000 not found
	
	** /stderr **
	I1107 16:27:11.010075    9085 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 16:27:11.061974    9085 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:27:11.063590    9085 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:27:11.063967    9085 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002153170}
	I1107 16:27:11.063983    9085 network_create.go:124] attempt to create docker network force-systemd-env-582000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1107 16:27:11.064070    9085 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-582000 force-systemd-env-582000
	W1107 16:27:11.113948    9085 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-582000 force-systemd-env-582000 returned with exit code 1
	W1107 16:27:11.113983    9085 network_create.go:149] failed to create docker network force-systemd-env-582000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-582000 force-systemd-env-582000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1107 16:27:11.113999    9085 network_create.go:116] failed to create docker network force-systemd-env-582000 192.168.67.0/24, will retry: subnet is taken
	I1107 16:27:11.115380    9085 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:27:11.115742    9085 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00223b960}
	I1107 16:27:11.115752    9085 network_create.go:124] attempt to create docker network force-systemd-env-582000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1107 16:27:11.115822    9085 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-582000 force-systemd-env-582000
	I1107 16:27:11.201278    9085 network_create.go:108] docker network force-systemd-env-582000 192.168.76.0/24 created
	I1107 16:27:11.201332    9085 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-582000" container
	I1107 16:27:11.201447    9085 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 16:27:11.254566    9085 cli_runner.go:164] Run: docker volume create force-systemd-env-582000 --label name.minikube.sigs.k8s.io=force-systemd-env-582000 --label created_by.minikube.sigs.k8s.io=true
	I1107 16:27:11.306063    9085 oci.go:103] Successfully created a docker volume force-systemd-env-582000
	I1107 16:27:11.306176    9085 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-582000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-582000 --entrypoint /usr/bin/test -v force-systemd-env-582000:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 16:27:11.765444    9085 oci.go:107] Successfully prepared a docker volume force-systemd-env-582000
	I1107 16:27:11.765485    9085 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 16:27:11.765497    9085 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 16:27:11.765621    9085 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-582000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 16:33:10.843868    9085 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 16:33:10.844056    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:33:10.897517    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	I1107 16:33:10.897654    9085 retry.go:31] will retry after 367.919994ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:11.267293    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:33:11.318241    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	I1107 16:33:11.318348    9085 retry.go:31] will retry after 211.381605ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:11.529962    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:33:11.581171    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	I1107 16:33:11.581260    9085 retry.go:31] will retry after 308.887547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:11.891395    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:33:11.943098    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	W1107 16:33:11.943202    9085 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	
	W1107 16:33:11.943223    9085 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:11.943275    9085 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 16:33:11.943328    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:33:11.993481    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	I1107 16:33:11.993567    9085 retry.go:31] will retry after 138.58391ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:12.134521    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:33:12.186196    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	I1107 16:33:12.186291    9085 retry.go:31] will retry after 202.978552ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:12.389522    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:33:12.440149    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	I1107 16:33:12.440234    9085 retry.go:31] will retry after 412.441521ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:12.854436    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:33:12.908418    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	W1107 16:33:12.908514    9085 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	
	W1107 16:33:12.908537    9085 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:12.908547    9085 start.go:128] duration metric: createHost completed in 6m2.109741095s
	I1107 16:33:12.908557    9085 start.go:83] releasing machines lock for "force-systemd-env-582000", held for 6m2.109828872s
	W1107 16:33:12.908570    9085 start.go:691] error starting host: creating host: create host timed out in 360.000000 seconds
	I1107 16:33:12.908977    9085 cli_runner.go:164] Run: docker container inspect force-systemd-env-582000 --format={{.State.Status}}
	W1107 16:33:12.960308    9085 cli_runner.go:211] docker container inspect force-systemd-env-582000 --format={{.State.Status}} returned with exit code 1
	I1107 16:33:12.960361    9085 delete.go:82] Unable to get host status for force-systemd-env-582000, assuming it has already been deleted: state: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	W1107 16:33:12.960448    9085 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1107 16:33:12.960462    9085 start.go:706] Will try again in 5 seconds ...
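The `retry.go:31] will retry after …` lines above show an increasing, jittered wait between attempts (≈139 ms, 203 ms, 412 ms, …). A minimal sketch of that retry-with-backoff pattern, with hypothetical names and constants (minikube's actual `retry` package differs in detail):

```python
import random
import time

def with_backoff(op, attempts=5, base=0.1, cap=5.0):
    """Run op() until it succeeds. Between failures, sleep an
    exponentially growing delay (base * 2**attempt, capped at `cap`)
    with +/-50% jitter -- the cadence visible in the log. Re-raises
    the last error once attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return op()
        except Exception:
            if attempt == attempts - 1:
                raise
            delay = min(cap, base * 2 ** attempt)
            time.sleep(delay * random.uniform(0.5, 1.5))
```

Note the outer flow in the log is a second, coarser retry layer: after the per-call backoff exhausts and `createHost` times out at 360 s, `start.go` waits a flat 5 seconds and re-enters the whole create path.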
	I1107 16:33:17.961230    9085 start.go:365] acquiring machines lock for force-systemd-env-582000: {Name:mk26e031a03417a9383e80100b18dd2dcff146c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 16:33:17.961465    9085 start.go:369] acquired machines lock for "force-systemd-env-582000" in 188.644µs
	I1107 16:33:17.961503    9085 start.go:96] Skipping create...Using existing machine configuration
	I1107 16:33:17.961518    9085 fix.go:54] fixHost starting: 
	I1107 16:33:17.961966    9085 cli_runner.go:164] Run: docker container inspect force-systemd-env-582000 --format={{.State.Status}}
	W1107 16:33:18.013499    9085 cli_runner.go:211] docker container inspect force-systemd-env-582000 --format={{.State.Status}} returned with exit code 1
	I1107 16:33:18.013540    9085 fix.go:102] recreateIfNeeded on force-systemd-env-582000: state= err=unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:18.013555    9085 fix.go:107] machineExists: false. err=machine does not exist
	I1107 16:33:18.035377    9085 out.go:177] * docker "force-systemd-env-582000" container is missing, will recreate.
	I1107 16:33:18.078781    9085 delete.go:124] DEMOLISHING force-systemd-env-582000 ...
	I1107 16:33:18.078916    9085 cli_runner.go:164] Run: docker container inspect force-systemd-env-582000 --format={{.State.Status}}
	W1107 16:33:18.129521    9085 cli_runner.go:211] docker container inspect force-systemd-env-582000 --format={{.State.Status}} returned with exit code 1
	W1107 16:33:18.129560    9085 stop.go:75] unable to get state: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:18.129577    9085 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:18.129939    9085 cli_runner.go:164] Run: docker container inspect force-systemd-env-582000 --format={{.State.Status}}
	W1107 16:33:18.178654    9085 cli_runner.go:211] docker container inspect force-systemd-env-582000 --format={{.State.Status}} returned with exit code 1
	I1107 16:33:18.178725    9085 delete.go:82] Unable to get host status for force-systemd-env-582000, assuming it has already been deleted: state: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:18.178813    9085 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-582000
	W1107 16:33:18.228523    9085 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-582000 returned with exit code 1
	I1107 16:33:18.228576    9085 kic.go:371] could not find the container force-systemd-env-582000 to remove it. will try anyways
	I1107 16:33:18.228663    9085 cli_runner.go:164] Run: docker container inspect force-systemd-env-582000 --format={{.State.Status}}
	W1107 16:33:18.278666    9085 cli_runner.go:211] docker container inspect force-systemd-env-582000 --format={{.State.Status}} returned with exit code 1
	W1107 16:33:18.278715    9085 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:18.278789    9085 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-582000 /bin/bash -c "sudo init 0"
	W1107 16:33:18.328860    9085 cli_runner.go:211] docker exec --privileged -t force-systemd-env-582000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1107 16:33:18.328897    9085 oci.go:650] error shutdown force-systemd-env-582000: docker exec --privileged -t force-systemd-env-582000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:19.329423    9085 cli_runner.go:164] Run: docker container inspect force-systemd-env-582000 --format={{.State.Status}}
	W1107 16:33:19.382855    9085 cli_runner.go:211] docker container inspect force-systemd-env-582000 --format={{.State.Status}} returned with exit code 1
	I1107 16:33:19.382913    9085 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:19.382925    9085 oci.go:664] temporary error: container force-systemd-env-582000 status is  but expect it to be exited
	I1107 16:33:19.382949    9085 retry.go:31] will retry after 502.503936ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:19.885744    9085 cli_runner.go:164] Run: docker container inspect force-systemd-env-582000 --format={{.State.Status}}
	W1107 16:33:19.941173    9085 cli_runner.go:211] docker container inspect force-systemd-env-582000 --format={{.State.Status}} returned with exit code 1
	I1107 16:33:19.941221    9085 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:19.941234    9085 oci.go:664] temporary error: container force-systemd-env-582000 status is  but expect it to be exited
	I1107 16:33:19.941259    9085 retry.go:31] will retry after 836.428083ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:20.780119    9085 cli_runner.go:164] Run: docker container inspect force-systemd-env-582000 --format={{.State.Status}}
	W1107 16:33:20.835229    9085 cli_runner.go:211] docker container inspect force-systemd-env-582000 --format={{.State.Status}} returned with exit code 1
	I1107 16:33:20.835286    9085 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:20.835296    9085 oci.go:664] temporary error: container force-systemd-env-582000 status is  but expect it to be exited
	I1107 16:33:20.835320    9085 retry.go:31] will retry after 593.127317ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:21.430826    9085 cli_runner.go:164] Run: docker container inspect force-systemd-env-582000 --format={{.State.Status}}
	W1107 16:33:21.484313    9085 cli_runner.go:211] docker container inspect force-systemd-env-582000 --format={{.State.Status}} returned with exit code 1
	I1107 16:33:21.484361    9085 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:21.484371    9085 oci.go:664] temporary error: container force-systemd-env-582000 status is  but expect it to be exited
	I1107 16:33:21.484397    9085 retry.go:31] will retry after 945.202522ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:22.430073    9085 cli_runner.go:164] Run: docker container inspect force-systemd-env-582000 --format={{.State.Status}}
	W1107 16:33:22.484432    9085 cli_runner.go:211] docker container inspect force-systemd-env-582000 --format={{.State.Status}} returned with exit code 1
	I1107 16:33:22.484479    9085 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:22.484498    9085 oci.go:664] temporary error: container force-systemd-env-582000 status is  but expect it to be exited
	I1107 16:33:22.484521    9085 retry.go:31] will retry after 3.732916513s: couldn't verify container is exited. %v: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:26.219653    9085 cli_runner.go:164] Run: docker container inspect force-systemd-env-582000 --format={{.State.Status}}
	W1107 16:33:26.274080    9085 cli_runner.go:211] docker container inspect force-systemd-env-582000 --format={{.State.Status}} returned with exit code 1
	I1107 16:33:26.274126    9085 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:26.274135    9085 oci.go:664] temporary error: container force-systemd-env-582000 status is  but expect it to be exited
	I1107 16:33:26.274160    9085 retry.go:31] will retry after 2.627096151s: couldn't verify container is exited. %v: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:28.902951    9085 cli_runner.go:164] Run: docker container inspect force-systemd-env-582000 --format={{.State.Status}}
	W1107 16:33:28.956428    9085 cli_runner.go:211] docker container inspect force-systemd-env-582000 --format={{.State.Status}} returned with exit code 1
	I1107 16:33:28.956477    9085 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:28.956487    9085 oci.go:664] temporary error: container force-systemd-env-582000 status is  but expect it to be exited
	I1107 16:33:28.956510    9085 retry.go:31] will retry after 3.230603763s: couldn't verify container is exited. %v: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:32.187178    9085 cli_runner.go:164] Run: docker container inspect force-systemd-env-582000 --format={{.State.Status}}
	W1107 16:33:32.237497    9085 cli_runner.go:211] docker container inspect force-systemd-env-582000 --format={{.State.Status}} returned with exit code 1
	I1107 16:33:32.237549    9085 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:33:32.237564    9085 oci.go:664] temporary error: container force-systemd-env-582000 status is  but expect it to be exited
	I1107 16:33:32.237596    9085 oci.go:88] couldn't shut down force-systemd-env-582000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	 
	I1107 16:33:32.237679    9085 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-582000
	I1107 16:33:32.288286    9085 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-582000
	W1107 16:33:32.338118    9085 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-582000 returned with exit code 1
	I1107 16:33:32.338240    9085 cli_runner.go:164] Run: docker network inspect force-systemd-env-582000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 16:33:32.388206    9085 cli_runner.go:164] Run: docker network rm force-systemd-env-582000
	I1107 16:33:32.488098    9085 fix.go:114] Sleeping 1 second for extra luck!
	I1107 16:33:33.488614    9085 start.go:125] createHost starting for "" (driver="docker")
	I1107 16:33:33.511496    9085 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1107 16:33:33.511719    9085 start.go:159] libmachine.API.Create for "force-systemd-env-582000" (driver="docker")
	I1107 16:33:33.511761    9085 client.go:168] LocalClient.Create starting
	I1107 16:33:33.511981    9085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca.pem
	I1107 16:33:33.512076    9085 main.go:141] libmachine: Decoding PEM data...
	I1107 16:33:33.512102    9085 main.go:141] libmachine: Parsing certificate...
	I1107 16:33:33.512191    9085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/cert.pem
	I1107 16:33:33.512273    9085 main.go:141] libmachine: Decoding PEM data...
	I1107 16:33:33.512290    9085 main.go:141] libmachine: Parsing certificate...
	I1107 16:33:33.534084    9085 cli_runner.go:164] Run: docker network inspect force-systemd-env-582000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 16:33:33.586973    9085 cli_runner.go:211] docker network inspect force-systemd-env-582000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 16:33:33.587068    9085 network_create.go:281] running [docker network inspect force-systemd-env-582000] to gather additional debugging logs...
	I1107 16:33:33.587086    9085 cli_runner.go:164] Run: docker network inspect force-systemd-env-582000
	W1107 16:33:33.637154    9085 cli_runner.go:211] docker network inspect force-systemd-env-582000 returned with exit code 1
	I1107 16:33:33.637185    9085 network_create.go:284] error running [docker network inspect force-systemd-env-582000]: docker network inspect force-systemd-env-582000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-582000 not found
	I1107 16:33:33.637198    9085 network_create.go:286] output of [docker network inspect force-systemd-env-582000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-582000 not found
	
	** /stderr **
	I1107 16:33:33.637328    9085 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 16:33:33.689266    9085 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:33:33.690809    9085 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:33:33.692244    9085 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:33:33.693849    9085 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:33:33.695575    9085 network.go:212] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:33:33.696220    9085 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00223ea20}
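The subnet scan above walks candidate /24s, skipping each one already reserved (192.168.49.0/24, .58, .67, .76, .85) until it finds 192.168.94.0/24 free. A minimal sketch of that scan, inferring the step of 9 in the third octet from the log (function name, step, and try count are assumptions, not minikube's API):

```python
from ipaddress import ip_network

def first_free_subnet(reserved, start="192.168.49.0/24", step=9, tries=20):
    """Walk candidate /24 subnets, bumping the third octet by `step`
    each try (.49 -> .58 -> .67 -> ... as seen in the log), and return
    the first candidate not in the reserved set."""
    net = ip_network(start)
    for _ in range(tries):
        if str(net) not in reserved:
            return str(net)
        octets = str(net.network_address).split(".")
        octets[2] = str(int(octets[2]) + step)
        net = ip_network(".".join(octets) + "/24")
    raise RuntimeError("no free private subnet found")
```

With the five reserved subnets from the log, the scan lands on 192.168.94.0/24, matching the network the run then creates with `docker network create --subnet=192.168.94.0/24`.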
	I1107 16:33:33.696255    9085 network_create.go:124] attempt to create docker network force-systemd-env-582000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I1107 16:33:33.696343    9085 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-582000 force-systemd-env-582000
	I1107 16:33:33.782833    9085 network_create.go:108] docker network force-systemd-env-582000 192.168.94.0/24 created
	I1107 16:33:33.782882    9085 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-env-582000" container
	I1107 16:33:33.783016    9085 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 16:33:33.835195    9085 cli_runner.go:164] Run: docker volume create force-systemd-env-582000 --label name.minikube.sigs.k8s.io=force-systemd-env-582000 --label created_by.minikube.sigs.k8s.io=true
	I1107 16:33:33.885333    9085 oci.go:103] Successfully created a docker volume force-systemd-env-582000
	I1107 16:33:33.885453    9085 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-582000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-582000 --entrypoint /usr/bin/test -v force-systemd-env-582000:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 16:33:34.162503    9085 oci.go:107] Successfully prepared a docker volume force-systemd-env-582000
	I1107 16:33:34.162538    9085 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 16:33:34.162550    9085 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 16:33:34.162649    9085 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-582000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 16:39:33.499424    9085 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 16:39:33.499551    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:39:33.555268    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	I1107 16:39:33.555387    9085 retry.go:31] will retry after 244.419372ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:39:33.801041    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:39:33.856331    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	I1107 16:39:33.856452    9085 retry.go:31] will retry after 484.614441ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:39:34.341521    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:39:34.395223    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	I1107 16:39:34.395322    9085 retry.go:31] will retry after 585.177926ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:39:34.982871    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:39:35.057569    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	W1107 16:39:35.057685    9085 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	
	W1107 16:39:35.057702    9085 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:39:35.057759    9085 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 16:39:35.057812    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:39:35.108651    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	I1107 16:39:35.108754    9085 retry.go:31] will retry after 258.694361ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:39:35.369980    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:39:35.424101    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	I1107 16:39:35.424201    9085 retry.go:31] will retry after 223.792765ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:39:35.650425    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:39:35.704255    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	I1107 16:39:35.704354    9085 retry.go:31] will retry after 370.210844ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:39:36.076937    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:39:36.131840    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	I1107 16:39:36.131936    9085 retry.go:31] will retry after 537.051077ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:39:36.671438    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:39:36.725902    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	W1107 16:39:36.726006    9085 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	
	W1107 16:39:36.726024    9085 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:39:36.726048    9085 start.go:128] duration metric: createHost completed in 6m3.251738306s
	I1107 16:39:36.726115    9085 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 16:39:36.726169    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:39:36.776182    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	I1107 16:39:36.776268    9085 retry.go:31] will retry after 313.042598ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:39:37.091350    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:39:37.146321    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	I1107 16:39:37.146412    9085 retry.go:31] will retry after 458.709924ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:39:37.607508    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:39:37.661841    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	I1107 16:39:37.661942    9085 retry.go:31] will retry after 460.367711ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:39:38.123363    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:39:38.175985    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	W1107 16:39:38.176085    9085 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	
	W1107 16:39:38.176103    9085 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:39:38.176161    9085 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 16:39:38.176224    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:39:38.226832    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	I1107 16:39:38.226937    9085 retry.go:31] will retry after 373.835759ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:39:38.603131    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:39:38.656226    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	I1107 16:39:38.656315    9085 retry.go:31] will retry after 502.206495ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:39:39.158791    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:39:39.212141    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	I1107 16:39:39.212241    9085 retry.go:31] will retry after 599.735956ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:39:39.813618    9085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000
	W1107 16:39:39.866118    9085 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000 returned with exit code 1
	W1107 16:39:39.866217    9085 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	
	W1107 16:39:39.866231    9085 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-582000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-582000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	I1107 16:39:39.866246    9085 fix.go:56] fixHost completed within 6m21.919799085s
	I1107 16:39:39.866256    9085 start.go:83] releasing machines lock for "force-systemd-env-582000", held for 6m21.919846828s
	W1107 16:39:39.866330    9085 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-582000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-582000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1107 16:39:39.909557    9085 out.go:177] 
	W1107 16:39:39.931589    9085 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1107 16:39:39.931636    9085 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1107 16:39:39.931682    9085 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1107 16:39:39.975655    9085 out.go:177] 

                                                
                                                
** /stderr **
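The stderr trace above shows minikube probing disk space on the node with `sh -c "df -BG /var | awk 'NR==2{print $4}'"` (and the usage-percentage variant with `$5`); both only fail here because the SSH session itself can never be created. As a sketch of what the awk step extracts, using canned `df` output (the filesystem name and sizes below are made up, not taken from this run):

```shell
# Minimal sketch of the probe minikube runs over SSH inside the node:
# row 2 of df output is the /var filesystem, field 4 is the space available.
df_out='Filesystem     1G-blocks  Used Available Use% Mounted on
/dev/vda1            59G   12G       44G  22% /var'

avail=$(printf '%s\n' "$df_out" | awk 'NR==2{print $4}')
echo "available on /var: $avail"   # → available on /var: 44G
```

The same pipeline shape with `$5` yields the `Use%` column, matching the two `start.go` probes in the trace.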
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-582000 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-582000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-582000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (200.514728ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_ssh_bee0f26250c13d3e98e295459d643952c0091a53_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-582000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-11-07 16:39:40.251053 -0800 PST m=+5959.240402373
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-582000
helpers_test.go:235: (dbg) docker inspect force-systemd-env-582000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "force-systemd-env-582000",
	        "Id": "b11ed363b5368cd5eba0fc94759e601a207b22d48b9b4c39eaf3f1e52cea176d",
	        "Created": "2023-11-08T00:33:33.743879788Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-env-582000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-582000 -n force-systemd-env-582000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-582000 -n force-systemd-env-582000: exit status 7 (106.256561ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 16:39:40.409953    9607 status.go:249] status error: host: state: unknown state "force-systemd-env-582000": docker container inspect force-systemd-env-582000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-582000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-582000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-582000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-582000
--- FAIL: TestForceSystemdEnv (751.16s)
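The `retry.go:31` lines above show the pattern behind the repeated `docker container inspect` calls: the probe is retried with a growing, jittered delay until it succeeds or the 360-second `createHost` timeout expires. A hedged sh sketch of that loop (the probe, growth factor, and attempt cap below are illustrative stand-ins, not minikube's actual values, and the sleep is omitted so the sketch runs instantly):

```shell
# Poll-with-backoff sketch: retry a failing probe, growing the delay each
# round, until an attempt cap (standing in for the deadline) is reached.
probe() { false; }            # stand-in for `docker container inspect ...`
attempts=0
delay_ms=250
max_attempts=5
until probe; do
  attempts=$((attempts + 1))
  if [ "$attempts" -ge "$max_attempts" ]; then break; fi
  delay_ms=$((delay_ms * 3 / 2))   # grow the delay; no sleep in this sketch
done
echo "gave up after $attempts attempts (next delay would be ${delay_ms}ms)"
```

In the real run the probe keeps failing because the container was never created, so the loop only ends when the overall host-creation deadline fires, producing the `DRV_CREATE_TIMEOUT` exit above.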

                                                
                                    
TestFunctional/parallel/License (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
functional_test.go:2284: (dbg) Non-zero exit: out/minikube-darwin-amd64 license: exit status 40 (233.766118ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2285: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.23s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (260.71s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-973000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E1107 15:11:27.503298    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:13:43.648878    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:14:06.500793    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 15:14:06.506282    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 15:14:06.518161    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 15:14:06.540329    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 15:14:06.580878    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 15:14:06.663121    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 15:14:06.823841    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 15:14:07.143966    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 15:14:07.785580    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 15:14:09.067702    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 15:14:11.340794    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:14:11.628865    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 15:14:16.750622    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 15:14:26.990703    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 15:14:47.470683    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 15:15:28.431223    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-973000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m20.672022303s)

                                                
                                                
-- stdout --
	* [ingress-addon-legacy-973000] minikube v1.32.0 on Darwin 14.1
	  - MINIKUBE_LOCATION=17585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-973000 in cluster ingress-addon-legacy-973000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 15:11:17.890386    4786 out.go:296] Setting OutFile to fd 1 ...
	I1107 15:11:17.890650    4786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:11:17.890656    4786 out.go:309] Setting ErrFile to fd 2...
	I1107 15:11:17.890660    4786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:11:17.890830    4786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
	I1107 15:11:17.892265    4786 out.go:303] Setting JSON to false
	I1107 15:11:17.914612    4786 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2451,"bootTime":1699396226,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1107 15:11:17.914723    4786 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1107 15:11:17.936544    4786 out.go:177] * [ingress-addon-legacy-973000] minikube v1.32.0 on Darwin 14.1
	I1107 15:11:17.979116    4786 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 15:11:17.979197    4786 notify.go:220] Checking for updates...
	I1107 15:11:18.021980    4786 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	I1107 15:11:18.042995    4786 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 15:11:18.064235    4786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 15:11:18.085132    4786 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	I1107 15:11:18.106043    4786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 15:11:18.127753    4786 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 15:11:18.184444    4786 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.0 (126437)
	I1107 15:11:18.184592    4786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 15:11:18.284072    4786 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:54 SystemTime:2023-11-07 23:11:18.274962932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218715136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-e2cce99df426 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1107 15:11:18.305464    4786 out.go:177] * Using the docker driver based on user configuration
	I1107 15:11:18.326093    4786 start.go:298] selected driver: docker
	I1107 15:11:18.326119    4786 start.go:902] validating driver "docker" against <nil>
	I1107 15:11:18.326135    4786 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 15:11:18.330289    4786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 15:11:18.431688    4786 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:54 SystemTime:2023-11-07 23:11:18.422383947 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218715136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-e2cce99df426 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=u
nconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription
:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Do
cker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1107 15:11:18.431865    4786 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 15:11:18.432047    4786 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 15:11:18.452798    4786 out.go:177] * Using Docker Desktop driver with root privileges
	I1107 15:11:18.473833    4786 cni.go:84] Creating CNI manager for ""
	I1107 15:11:18.473873    4786 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1107 15:11:18.473890    4786 start_flags.go:323] config:
	{Name:ingress-addon-legacy-973000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-973000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 15:11:18.496773    4786 out.go:177] * Starting control plane node ingress-addon-legacy-973000 in cluster ingress-addon-legacy-973000
	I1107 15:11:18.538796    4786 cache.go:121] Beginning downloading kic base image for docker with docker
	I1107 15:11:18.559565    4786 out.go:177] * Pulling base image ...
	I1107 15:11:18.601898    4786 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1107 15:11:18.602005    4786 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 15:11:18.652598    4786 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1107 15:11:18.652629    4786 cache.go:56] Caching tarball of preloaded images
	I1107 15:11:18.652829    4786 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1107 15:11:18.673523    4786 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1107 15:11:18.654113    4786 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1107 15:11:18.715892    4786 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1107 15:11:18.715892    4786 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1107 15:11:18.803614    4786 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1107 15:11:24.993038    4786 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1107 15:11:24.993229    4786 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1107 15:11:25.615165    4786 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I1107 15:11:25.615417    4786 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/config.json ...
	I1107 15:11:25.615440    4786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/config.json: {Name:mk41e9ce697e0922b1f13aacbe29a495450bef33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 15:11:25.615768    4786 cache.go:194] Successfully downloaded all kic artifacts
	I1107 15:11:25.615803    4786 start.go:365] acquiring machines lock for ingress-addon-legacy-973000: {Name:mk058ff0bf1f3a23da0b017b028d053516147fb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 15:11:25.615962    4786 start.go:369] acquired machines lock for "ingress-addon-legacy-973000" in 102.009µs
	I1107 15:11:25.616007    4786 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-973000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-973000 Namespace:default APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 15:11:25.616055    4786 start.go:125] createHost starting for "" (driver="docker")
	I1107 15:11:25.647035    4786 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1107 15:11:25.647359    4786 start.go:159] libmachine.API.Create for "ingress-addon-legacy-973000" (driver="docker")
	I1107 15:11:25.647409    4786 client.go:168] LocalClient.Create starting
	I1107 15:11:25.647574    4786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca.pem
	I1107 15:11:25.647675    4786 main.go:141] libmachine: Decoding PEM data...
	I1107 15:11:25.647707    4786 main.go:141] libmachine: Parsing certificate...
	I1107 15:11:25.647808    4786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/cert.pem
	I1107 15:11:25.647880    4786 main.go:141] libmachine: Decoding PEM data...
	I1107 15:11:25.647895    4786 main.go:141] libmachine: Parsing certificate...
	I1107 15:11:25.667971    4786 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-973000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 15:11:25.721006    4786 cli_runner.go:211] docker network inspect ingress-addon-legacy-973000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 15:11:25.721123    4786 network_create.go:281] running [docker network inspect ingress-addon-legacy-973000] to gather additional debugging logs...
	I1107 15:11:25.721143    4786 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-973000
	W1107 15:11:25.771709    4786 cli_runner.go:211] docker network inspect ingress-addon-legacy-973000 returned with exit code 1
	I1107 15:11:25.771742    4786 network_create.go:284] error running [docker network inspect ingress-addon-legacy-973000]: docker network inspect ingress-addon-legacy-973000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-973000 not found
	I1107 15:11:25.771761    4786 network_create.go:286] output of [docker network inspect ingress-addon-legacy-973000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-973000 not found
	
	** /stderr **
	I1107 15:11:25.771891    4786 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 15:11:25.822339    4786 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021c3860}
	I1107 15:11:25.822380    4786 network_create.go:124] attempt to create docker network ingress-addon-legacy-973000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
	I1107 15:11:25.822452    4786 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-973000 ingress-addon-legacy-973000
	I1107 15:11:25.907669    4786 network_create.go:108] docker network ingress-addon-legacy-973000 192.168.49.0/24 created
	I1107 15:11:25.907713    4786 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-973000" container
	I1107 15:11:25.907831    4786 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 15:11:25.958053    4786 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-973000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-973000 --label created_by.minikube.sigs.k8s.io=true
	I1107 15:11:26.009564    4786 oci.go:103] Successfully created a docker volume ingress-addon-legacy-973000
	I1107 15:11:26.009720    4786 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-973000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-973000 --entrypoint /usr/bin/test -v ingress-addon-legacy-973000:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 15:11:26.376368    4786 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-973000
	I1107 15:11:26.376435    4786 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1107 15:11:26.376448    4786 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 15:11:26.376560    4786 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-973000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 15:11:28.558394    4786 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-973000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (2.181804375s)
	I1107 15:11:28.558425    4786 kic.go:203] duration metric: took 2.182016 seconds to extract preloaded images to volume
	I1107 15:11:28.558551    4786 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1107 15:11:28.656522    4786 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-973000 --name ingress-addon-legacy-973000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-973000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-973000 --network ingress-addon-legacy-973000 --ip 192.168.49.2 --volume ingress-addon-legacy-973000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1107 15:11:28.935870    4786 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-973000 --format={{.State.Running}}
	I1107 15:11:28.990380    4786 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-973000 --format={{.State.Status}}
	I1107 15:11:29.044300    4786 cli_runner.go:164] Run: docker exec ingress-addon-legacy-973000 stat /var/lib/dpkg/alternatives/iptables
	I1107 15:11:29.141275    4786 oci.go:144] the created container "ingress-addon-legacy-973000" has a running status.
	I1107 15:11:29.141331    4786 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/17585-1518/.minikube/machines/ingress-addon-legacy-973000/id_rsa...
	I1107 15:11:29.288138    4786 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17585-1518/.minikube/machines/ingress-addon-legacy-973000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1107 15:11:29.288193    4786 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17585-1518/.minikube/machines/ingress-addon-legacy-973000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1107 15:11:29.346729    4786 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-973000 --format={{.State.Status}}
	I1107 15:11:29.402641    4786 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1107 15:11:29.402664    4786 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-973000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1107 15:11:29.512009    4786 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-973000 --format={{.State.Status}}
	I1107 15:11:29.562976    4786 machine.go:88] provisioning docker machine ...
	I1107 15:11:29.563034    4786 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-973000"
	I1107 15:11:29.563134    4786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-973000
	I1107 15:11:29.614166    4786 main.go:141] libmachine: Using SSH client type: native
	I1107 15:11:29.614499    4786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1405ca0] 0x1408980 <nil>  [] 0s} 127.0.0.1 50479 <nil> <nil>}
	I1107 15:11:29.614512    4786 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-973000 && echo "ingress-addon-legacy-973000" | sudo tee /etc/hostname
	I1107 15:11:29.740662    4786 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-973000
	
	I1107 15:11:29.740748    4786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-973000
	I1107 15:11:29.792434    4786 main.go:141] libmachine: Using SSH client type: native
	I1107 15:11:29.792737    4786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1405ca0] 0x1408980 <nil>  [] 0s} 127.0.0.1 50479 <nil> <nil>}
	I1107 15:11:29.792752    4786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-973000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-973000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-973000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1107 15:11:29.908181    4786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
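The hosts-file edit the provisioner just ran over SSH can be reproduced locally against a scratch copy. This is an editorial sketch, not part of the test log: it applies the same grep/sed logic to a temp file (the seed contents and the `HOSTS`/`NAME` variables are illustrative; only the hostname comes from the log), and it assumes GNU `sed -i`.

```shell
# Stand-in for /etc/hosts so the sketch is runnable anywhere.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$HOSTS"
NAME=ingress-addon-legacy-973000

# Same shape as the SSH script above: skip if the name is already present,
# rewrite an existing 127.0.1.1 entry if there is one, else append a new one.
if ! grep -q "[[:space:]]$NAME" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```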
	I1107 15:11:29.908202    4786 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17585-1518/.minikube CaCertPath:/Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17585-1518/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17585-1518/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17585-1518/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17585-1518/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17585-1518/.minikube}
	I1107 15:11:29.908231    4786 ubuntu.go:177] setting up certificates
	I1107 15:11:29.908243    4786 provision.go:83] configureAuth start
	I1107 15:11:29.908321    4786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-973000
	I1107 15:11:29.958819    4786 provision.go:138] copyHostCerts
	I1107 15:11:29.958867    4786 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17585-1518/.minikube/ca.pem
	I1107 15:11:29.958925    4786 exec_runner.go:144] found /Users/jenkins/minikube-integration/17585-1518/.minikube/ca.pem, removing ...
	I1107 15:11:29.958932    4786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17585-1518/.minikube/ca.pem
	I1107 15:11:29.959047    4786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17585-1518/.minikube/ca.pem (1078 bytes)
	I1107 15:11:29.959227    4786 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17585-1518/.minikube/cert.pem
	I1107 15:11:29.959254    4786 exec_runner.go:144] found /Users/jenkins/minikube-integration/17585-1518/.minikube/cert.pem, removing ...
	I1107 15:11:29.959258    4786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17585-1518/.minikube/cert.pem
	I1107 15:11:29.959332    4786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17585-1518/.minikube/cert.pem (1123 bytes)
	I1107 15:11:29.959487    4786 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17585-1518/.minikube/key.pem
	I1107 15:11:29.959530    4786 exec_runner.go:144] found /Users/jenkins/minikube-integration/17585-1518/.minikube/key.pem, removing ...
	I1107 15:11:29.959535    4786 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17585-1518/.minikube/key.pem
	I1107 15:11:29.959608    4786 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17585-1518/.minikube/key.pem (1679 bytes)
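The `copyHostCerts` sequence above follows a remove-then-copy pattern for each certificate: any stale destination file is deleted before the fresh copy lands. An editorial sketch with temp-file stand-ins (the real paths are the `.minikube` cert paths in the log):

```shell
# Temp stand-ins for certs/ca.pem (source) and .minikube/ca.pem (destination).
SRC=$(mktemp); DST=$(mktemp)
echo "cert-data" > "$SRC"
echo "stale"     > "$DST"

rm -f "$DST"       # exec_runner removes the existing destination first ("rm:" lines)
cp "$SRC" "$DST"   # then copies the fresh cert into place ("cp:" lines)
cat "$DST"
```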
	I1107 15:11:29.959750    4786 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17585-1518/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-973000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-973000]
	I1107 15:11:30.099308    4786 provision.go:172] copyRemoteCerts
	I1107 15:11:30.099365    4786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1107 15:11:30.099421    4786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-973000
	I1107 15:11:30.150839    4786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50479 SSHKeyPath:/Users/jenkins/minikube-integration/17585-1518/.minikube/machines/ingress-addon-legacy-973000/id_rsa Username:docker}
	I1107 15:11:30.236817    4786 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1107 15:11:30.236894    4786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1107 15:11:30.256808    4786 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17585-1518/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1107 15:11:30.256871    4786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17585-1518/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1107 15:11:30.277035    4786 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17585-1518/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1107 15:11:30.277105    4786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17585-1518/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1107 15:11:30.297706    4786 provision.go:86] duration metric: configureAuth took 389.455956ms
	I1107 15:11:30.297721    4786 ubuntu.go:193] setting minikube options for container-runtime
	I1107 15:11:30.297864    4786 config.go:182] Loaded profile config "ingress-addon-legacy-973000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1107 15:11:30.297929    4786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-973000
	I1107 15:11:30.349425    4786 main.go:141] libmachine: Using SSH client type: native
	I1107 15:11:30.349709    4786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1405ca0] 0x1408980 <nil>  [] 0s} 127.0.0.1 50479 <nil> <nil>}
	I1107 15:11:30.349727    4786 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1107 15:11:30.464972    4786 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1107 15:11:30.464987    4786 ubuntu.go:71] root file system type: overlay
	I1107 15:11:30.465071    4786 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1107 15:11:30.465160    4786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-973000
	I1107 15:11:30.516948    4786 main.go:141] libmachine: Using SSH client type: native
	I1107 15:11:30.517257    4786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1405ca0] 0x1408980 <nil>  [] 0s} 127.0.0.1 50479 <nil> <nil>}
	I1107 15:11:30.517320    4786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1107 15:11:30.642039    4786 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
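The paired `ExecStart=` lines in the unit written above use the standard systemd reset idiom: an empty `ExecStart=` clears any command inherited from a base configuration so the following one is the only command, which avoids the "more than one ExecStart= setting" error the unit's comment mentions. A minimal sketch of that shape (paths illustrative, written to a temp file rather than a real unit):

```shell
# Sketch of the ExecStart reset idiom used in the unit above.
# The first, empty ExecStart= clears any inherited command; the
# second becomes the single effective command.
drop_in=$(mktemp)
cat > "$drop_in" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
# one reset line + one real command
grep -c '^ExecStart=' "$drop_in"
```

On a real host, systemd would read this as exactly one `ExecStart` after the reset.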
	I1107 15:11:30.642158    4786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-973000
	I1107 15:11:30.694609    4786 main.go:141] libmachine: Using SSH client type: native
	I1107 15:11:30.694918    4786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1405ca0] 0x1408980 <nil>  [] 0s} 127.0.0.1 50479 <nil> <nil>}
	I1107 15:11:30.694933    4786 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1107 15:11:31.252547    4786 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-11-07 23:11:30.640113016 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
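The SSH command above (`diff -u ... || { mv ...; systemctl ...; }`) is an update-if-changed pattern: `diff` exits non-zero when the candidate unit differs from the live one, and only then is the file swapped in and the daemon reloaded. A minimal sketch of the same control flow, with temp files standing in for the unit paths and the `systemctl` calls commented out:

```shell
# Update-if-changed sketch: replace the live file only when the
# candidate differs, mirroring the diff/mv step in the log above.
new=$(mktemp); cur=$(mktemp)
printf 'Restart=on-failure\n' > "$new"
printf 'Restart=always\n'     > "$cur"
# diff exits 1 on difference, so the block runs only when needed
if ! diff -u "$cur" "$new" >/dev/null; then
  mv "$new" "$cur"
  # sudo systemctl daemon-reload && sudo systemctl restart docker  # on a real host
fi
grep -q 'on-failure' "$cur" && echo updated
```

Running it a second time with identical files would skip the replacement, which is what makes the provisioning step idempotent.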
	I1107 15:11:31.252574    4786 machine.go:91] provisioned docker machine in 1.689599766s
	I1107 15:11:31.252588    4786 client.go:171] LocalClient.Create took 5.605278865s
	I1107 15:11:31.252603    4786 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-973000" took 5.605357175s
	I1107 15:11:31.252615    4786 start.go:300] post-start starting for "ingress-addon-legacy-973000" (driver="docker")
	I1107 15:11:31.252624    4786 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1107 15:11:31.252701    4786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1107 15:11:31.252759    4786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-973000
	I1107 15:11:31.322011    4786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50479 SSHKeyPath:/Users/jenkins/minikube-integration/17585-1518/.minikube/machines/ingress-addon-legacy-973000/id_rsa Username:docker}
	I1107 15:11:31.408038    4786 ssh_runner.go:195] Run: cat /etc/os-release
	I1107 15:11:31.412097    4786 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1107 15:11:31.412119    4786 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1107 15:11:31.412126    4786 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1107 15:11:31.412131    4786 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1107 15:11:31.412141    4786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17585-1518/.minikube/addons for local assets ...
	I1107 15:11:31.412253    4786 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17585-1518/.minikube/files for local assets ...
	I1107 15:11:31.412456    4786 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17585-1518/.minikube/files/etc/ssl/certs/20892.pem -> 20892.pem in /etc/ssl/certs
	I1107 15:11:31.412463    4786 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17585-1518/.minikube/files/etc/ssl/certs/20892.pem -> /etc/ssl/certs/20892.pem
	I1107 15:11:31.412665    4786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1107 15:11:31.421385    4786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17585-1518/.minikube/files/etc/ssl/certs/20892.pem --> /etc/ssl/certs/20892.pem (1708 bytes)
	I1107 15:11:31.441957    4786 start.go:303] post-start completed in 189.336822ms
	I1107 15:11:31.442531    4786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-973000
	I1107 15:11:31.493742    4786 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/config.json ...
	I1107 15:11:31.494198    4786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 15:11:31.494284    4786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-973000
	I1107 15:11:31.544841    4786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50479 SSHKeyPath:/Users/jenkins/minikube-integration/17585-1518/.minikube/machines/ingress-addon-legacy-973000/id_rsa Username:docker}
	I1107 15:11:31.628590    4786 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 15:11:31.633385    4786 start.go:128] duration metric: createHost completed in 6.01743439s
	I1107 15:11:31.633404    4786 start.go:83] releasing machines lock for "ingress-addon-legacy-973000", held for 6.017551828s
	I1107 15:11:31.633478    4786 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-973000
	I1107 15:11:31.684476    4786 ssh_runner.go:195] Run: cat /version.json
	I1107 15:11:31.684503    4786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1107 15:11:31.684556    4786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-973000
	I1107 15:11:31.684575    4786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-973000
	I1107 15:11:31.737005    4786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50479 SSHKeyPath:/Users/jenkins/minikube-integration/17585-1518/.minikube/machines/ingress-addon-legacy-973000/id_rsa Username:docker}
	I1107 15:11:31.737015    4786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50479 SSHKeyPath:/Users/jenkins/minikube-integration/17585-1518/.minikube/machines/ingress-addon-legacy-973000/id_rsa Username:docker}
	I1107 15:11:31.926496    4786 ssh_runner.go:195] Run: systemctl --version
	I1107 15:11:31.931400    4786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1107 15:11:31.936269    4786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1107 15:11:31.958288    4786 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1107 15:11:31.958374    4786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1107 15:11:31.973439    4786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1107 15:11:31.988964    4786 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
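The `find ... -exec sed` steps above rewrite any existing CNI bridge/podman subnet to the pod CIDR `10.244.0.0/16`. The core substitution, isolated against a toy config file (the input subnet here is illustrative):

```shell
# Sketch of the subnet rewrite applied to CNI configs above.
conf=$(mktemp)
printf '{\n  "subnet": "10.85.0.0/16",\n}\n' > "$conf"
# same capture-group substitution as the log's sed expression
sed -i -r 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' "$conf"
grep -o '10.244.0.0/16' "$conf"
```

The capture groups preserve whatever indentation and trailing comma surround the value, so the JSON stays structurally intact.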
	I1107 15:11:31.988981    4786 start.go:472] detecting cgroup driver to use...
	I1107 15:11:31.988996    4786 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1107 15:11:31.989107    4786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 15:11:32.004309    4786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1107 15:11:32.013931    4786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1107 15:11:32.023350    4786 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1107 15:11:32.023415    4786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1107 15:11:32.033006    4786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1107 15:11:32.042667    4786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1107 15:11:32.052170    4786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1107 15:11:32.061701    4786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1107 15:11:32.070413    4786 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1107 15:11:32.079586    4786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1107 15:11:32.087720    4786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1107 15:11:32.095877    4786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 15:11:32.145741    4786 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1107 15:11:32.225917    4786 start.go:472] detecting cgroup driver to use...
	I1107 15:11:32.225938    4786 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1107 15:11:32.226005    4786 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1107 15:11:32.242685    4786 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1107 15:11:32.242753    4786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1107 15:11:32.253875    4786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1107 15:11:32.270863    4786 ssh_runner.go:195] Run: which cri-dockerd
	I1107 15:11:32.275551    4786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1107 15:11:32.285070    4786 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1107 15:11:32.326702    4786 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1107 15:11:32.384202    4786 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1107 15:11:32.463690    4786 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1107 15:11:32.463791    4786 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1107 15:11:32.480742    4786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 15:11:32.553116    4786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 15:11:32.782524    4786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 15:11:32.807604    4786 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1107 15:11:32.874353    4786 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	I1107 15:11:32.874497    4786 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-973000 dig +short host.docker.internal
	I1107 15:11:32.990256    4786 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1107 15:11:32.990363    4786 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1107 15:11:32.995094    4786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
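The `/etc/hosts` update above is another idempotent edit: filter out any stale `host.minikube.internal` line, then append the current mapping, so repeated runs never accumulate duplicates. A sketch with a temp file standing in for `/etc/hosts` (IPs illustrative):

```shell
# Idempotent hosts-entry sketch mirroring the grep -v / append above.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.1\thost.minikube.internal\n' > "$hosts"
{ grep -v 'host.minikube.internal' "$hosts"; \
  printf '192.168.65.254\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
# exactly one mapping remains, pointing at the new address
grep -c 'host.minikube.internal' "$hosts"
```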
	I1107 15:11:33.005875    4786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-973000
	I1107 15:11:33.056067    4786 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1107 15:11:33.056144    4786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 15:11:33.074250    4786 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1107 15:11:33.074271    4786 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1107 15:11:33.074356    4786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1107 15:11:33.083219    4786 ssh_runner.go:195] Run: which lz4
	I1107 15:11:33.087190    4786 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1107 15:11:33.087312    4786 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1107 15:11:33.091341    4786 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1107 15:11:33.091361    4786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I1107 15:11:38.637427    4786 docker.go:635] Took 5.550276 seconds to copy over tarball
	I1107 15:11:38.637496    4786 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1107 15:11:40.296016    4786 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.658536426s)
	I1107 15:11:40.296030    4786 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1107 15:11:40.339259    4786 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1107 15:11:40.347915    4786 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I1107 15:11:40.362952    4786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1107 15:11:40.412020    4786 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1107 15:11:41.414817    4786 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.002796442s)
	I1107 15:11:41.414906    4786 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1107 15:11:41.433289    4786 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1107 15:11:41.433301    4786 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1107 15:11:41.433311    4786 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1107 15:11:41.438733    4786 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 15:11:41.438917    4786 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1107 15:11:41.439552    4786 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 15:11:41.439566    4786 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1107 15:11:41.439702    4786 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1107 15:11:41.439735    4786 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1107 15:11:41.439748    4786 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1107 15:11:41.440214    4786 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1107 15:11:41.444730    4786 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1107 15:11:41.444795    4786 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 15:11:41.444710    4786 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1107 15:11:41.444809    4786 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1107 15:11:41.445137    4786 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1107 15:11:41.445937    4786 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1107 15:11:41.446141    4786 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 15:11:41.446257    4786 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1107 15:11:41.864864    4786 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1107 15:11:41.885858    4786 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1107 15:11:41.885907    4786 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1107 15:11:41.885976    4786 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1107 15:11:41.904907    4786 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1107 15:11:41.911046    4786 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1107 15:11:41.928204    4786 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1107 15:11:41.928228    4786 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.7
	I1107 15:11:41.928294    4786 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I1107 15:11:41.931566    4786 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1107 15:11:41.936715    4786 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1107 15:11:41.947195    4786 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1107 15:11:41.952590    4786 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1107 15:11:41.952623    4786 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1107 15:11:41.952686    4786 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1107 15:11:41.957415    4786 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1107 15:11:41.957437    4786 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1107 15:11:41.957500    4786 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I1107 15:11:41.972345    4786 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1107 15:11:41.977825    4786 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1107 15:11:42.016303    4786 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1107 15:11:42.032790    4786 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 15:11:42.035751    4786 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1107 15:11:42.035778    4786 docker.go:323] Removing image: registry.k8s.io/pause:3.2
	I1107 15:11:42.035836    4786 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I1107 15:11:42.053024    4786 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1107 15:11:42.053060    4786 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 15:11:42.053148    4786 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1107 15:11:42.055954    4786 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1107 15:11:42.070656    4786 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1107 15:11:42.089947    4786 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1107 15:11:42.108370    4786 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1107 15:11:42.108395    4786 docker.go:323] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1107 15:11:42.108456    4786 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I1107 15:11:42.127020    4786 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1107 15:11:42.458955    4786 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1107 15:11:42.479468    4786 cache_images.go:92] LoadImages completed in 1.046164432s
	W1107 15:11:42.479516    4786 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
	I1107 15:11:42.479600    4786 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1107 15:11:42.525682    4786 cni.go:84] Creating CNI manager for ""
	I1107 15:11:42.525699    4786 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1107 15:11:42.525718    4786 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1107 15:11:42.525734    4786 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-973000 NodeName:ingress-addon-legacy-973000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1107 15:11:42.525824    4786 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-973000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1107 15:11:42.525886    4786 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-973000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-973000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1107 15:11:42.525945    4786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1107 15:11:42.535041    4786 binaries.go:44] Found k8s binaries, skipping transfer
	I1107 15:11:42.535106    4786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1107 15:11:42.543436    4786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1107 15:11:42.558583    4786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1107 15:11:42.573911    4786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I1107 15:11:42.589301    4786 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1107 15:11:42.593473    4786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1107 15:11:42.603762    4786 certs.go:56] Setting up /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000 for IP: 192.168.49.2
	I1107 15:11:42.603785    4786 certs.go:190] acquiring lock for shared ca certs: {Name:mk0745f65f55ca30ba321b5b3d749606602acfc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 15:11:42.603963    4786 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17585-1518/.minikube/ca.key
	I1107 15:11:42.604040    4786 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17585-1518/.minikube/proxy-client-ca.key
	I1107 15:11:42.604087    4786 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/client.key
	I1107 15:11:42.604100    4786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/client.crt with IP's: []
	I1107 15:11:42.744407    4786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/client.crt ...
	I1107 15:11:42.744423    4786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/client.crt: {Name:mk629135d7cd83cd16e3918e24015ae057703021 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 15:11:42.744738    4786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/client.key ...
	I1107 15:11:42.744747    4786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/client.key: {Name:mk0e5c7508be580c2722ef0610a6766259995045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 15:11:42.744995    4786 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/apiserver.key.dd3b5fb2
	I1107 15:11:42.745011    4786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1107 15:11:42.803254    4786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/apiserver.crt.dd3b5fb2 ...
	I1107 15:11:42.803272    4786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/apiserver.crt.dd3b5fb2: {Name:mk9bbbc295ba2b6fbf7a0cae3213abe2da725cf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 15:11:42.803514    4786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/apiserver.key.dd3b5fb2 ...
	I1107 15:11:42.803523    4786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/apiserver.key.dd3b5fb2: {Name:mk46c141997d71907355a786de6ee074029c0f7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 15:11:42.803715    4786 certs.go:337] copying /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/apiserver.crt
	I1107 15:11:42.803882    4786 certs.go:341] copying /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/apiserver.key
	I1107 15:11:42.804037    4786 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/proxy-client.key
	I1107 15:11:42.804050    4786 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/proxy-client.crt with IP's: []
	I1107 15:11:42.916360    4786 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/proxy-client.crt ...
	I1107 15:11:42.916373    4786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/proxy-client.crt: {Name:mkacbc53699d1a6290fb9138a53937589fd5dd53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 15:11:42.917106    4786 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/proxy-client.key ...
	I1107 15:11:42.917115    4786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/proxy-client.key: {Name:mk7b804633731aad3166c1d723e2e0d512dd79b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 15:11:42.917337    4786 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1107 15:11:42.917364    4786 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1107 15:11:42.917384    4786 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1107 15:11:42.917407    4786 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1107 15:11:42.917424    4786 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17585-1518/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1107 15:11:42.917454    4786 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17585-1518/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1107 15:11:42.917482    4786 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17585-1518/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1107 15:11:42.917503    4786 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17585-1518/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1107 15:11:42.917590    4786 certs.go:437] found cert: /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/Users/jenkins/minikube-integration/17585-1518/.minikube/certs/2089.pem (1338 bytes)
	W1107 15:11:42.917636    4786 certs.go:433] ignoring /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/Users/jenkins/minikube-integration/17585-1518/.minikube/certs/2089_empty.pem, impossibly tiny 0 bytes
	I1107 15:11:42.917648    4786 certs.go:437] found cert: /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca-key.pem (1675 bytes)
	I1107 15:11:42.917682    4786 certs.go:437] found cert: /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca.pem (1078 bytes)
	I1107 15:11:42.917714    4786 certs.go:437] found cert: /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/Users/jenkins/minikube-integration/17585-1518/.minikube/certs/cert.pem (1123 bytes)
	I1107 15:11:42.917743    4786 certs.go:437] found cert: /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/Users/jenkins/minikube-integration/17585-1518/.minikube/certs/key.pem (1679 bytes)
	I1107 15:11:42.917808    4786 certs.go:437] found cert: /Users/jenkins/minikube-integration/17585-1518/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17585-1518/.minikube/files/etc/ssl/certs/20892.pem (1708 bytes)
	I1107 15:11:42.917845    4786 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17585-1518/.minikube/files/etc/ssl/certs/20892.pem -> /usr/share/ca-certificates/20892.pem
	I1107 15:11:42.917864    4786 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17585-1518/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1107 15:11:42.917879    4786 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/2089.pem -> /usr/share/ca-certificates/2089.pem
	I1107 15:11:42.918356    4786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1107 15:11:42.939473    4786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1107 15:11:42.959733    4786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1107 15:11:42.980477    4786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/ingress-addon-legacy-973000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1107 15:11:43.000807    4786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17585-1518/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1107 15:11:43.021012    4786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17585-1518/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1107 15:11:43.042280    4786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17585-1518/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1107 15:11:43.062950    4786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17585-1518/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1107 15:11:43.083672    4786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17585-1518/.minikube/files/etc/ssl/certs/20892.pem --> /usr/share/ca-certificates/20892.pem (1708 bytes)
	I1107 15:11:43.104411    4786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17585-1518/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1107 15:11:43.125069    4786 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/2089.pem --> /usr/share/ca-certificates/2089.pem (1338 bytes)
	I1107 15:11:43.145885    4786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1107 15:11:43.161453    4786 ssh_runner.go:195] Run: openssl version
	I1107 15:11:43.166907    4786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20892.pem && ln -fs /usr/share/ca-certificates/20892.pem /etc/ssl/certs/20892.pem"
	I1107 15:11:43.176218    4786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20892.pem
	I1107 15:11:43.180337    4786 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  7 23:06 /usr/share/ca-certificates/20892.pem
	I1107 15:11:43.180382    4786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20892.pem
	I1107 15:11:43.186939    4786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20892.pem /etc/ssl/certs/3ec20f2e.0"
	I1107 15:11:43.196449    4786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1107 15:11:43.205473    4786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1107 15:11:43.209701    4786 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  7 23:01 /usr/share/ca-certificates/minikubeCA.pem
	I1107 15:11:43.209747    4786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1107 15:11:43.216319    4786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1107 15:11:43.225232    4786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2089.pem && ln -fs /usr/share/ca-certificates/2089.pem /etc/ssl/certs/2089.pem"
	I1107 15:11:43.234106    4786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2089.pem
	I1107 15:11:43.238327    4786 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  7 23:06 /usr/share/ca-certificates/2089.pem
	I1107 15:11:43.238373    4786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2089.pem
	I1107 15:11:43.244865    4786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2089.pem /etc/ssl/certs/51391683.0"
	I1107 15:11:43.253687    4786 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1107 15:11:43.257887    4786 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1107 15:11:43.257931    4786 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-973000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-973000 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 15:11:43.258013    4786 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 15:11:43.275959    4786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1107 15:11:43.284963    4786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1107 15:11:43.293328    4786 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1107 15:11:43.293385    4786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 15:11:43.301528    4786 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 15:11:43.301553    4786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 15:11:43.352369    4786 kubeadm.go:322] W1107 23:11:43.351768    1697 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1107 15:11:43.469105    4786 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1107 15:11:43.469255    4786 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1107 15:11:43.531706    4786 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I1107 15:11:43.606650    4786 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 15:11:46.264333    4786 kubeadm.go:322] W1107 23:11:46.264057    1697 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1107 15:11:46.265176    4786 kubeadm.go:322] W1107 23:11:46.264863    1697 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1107 15:13:41.274717    4786 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1107 15:13:41.274906    4786 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I1107 15:13:41.278227    4786 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1107 15:13:41.278267    4786 kubeadm.go:322] [preflight] Running pre-flight checks
	I1107 15:13:41.278341    4786 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 15:13:41.278444    4786 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 15:13:41.278539    4786 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 15:13:41.278641    4786 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 15:13:41.278754    4786 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 15:13:41.278802    4786 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1107 15:13:41.278859    4786 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 15:13:41.299814    4786 out.go:204]   - Generating certificates and keys ...
	I1107 15:13:41.299952    4786 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1107 15:13:41.300057    4786 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1107 15:13:41.300180    4786 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1107 15:13:41.300242    4786 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1107 15:13:41.300309    4786 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1107 15:13:41.300361    4786 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1107 15:13:41.300418    4786 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1107 15:13:41.300569    4786 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-973000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1107 15:13:41.300629    4786 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1107 15:13:41.300771    4786 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-973000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1107 15:13:41.300859    4786 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1107 15:13:41.300948    4786 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1107 15:13:41.300996    4786 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1107 15:13:41.301063    4786 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 15:13:41.301117    4786 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 15:13:41.301180    4786 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 15:13:41.301246    4786 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 15:13:41.301289    4786 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 15:13:41.301350    4786 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 15:13:41.343054    4786 out.go:204]   - Booting up control plane ...
	I1107 15:13:41.343197    4786 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 15:13:41.343338    4786 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 15:13:41.343454    4786 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 15:13:41.343580    4786 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 15:13:41.343835    4786 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 15:13:41.343901    4786 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1107 15:13:41.344018    4786 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 15:13:41.344334    4786 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 15:13:41.344402    4786 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 15:13:41.344588    4786 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 15:13:41.344656    4786 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 15:13:41.344840    4786 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 15:13:41.344921    4786 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 15:13:41.345109    4786 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 15:13:41.345187    4786 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 15:13:41.345378    4786 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 15:13:41.345388    4786 kubeadm.go:322] 
	I1107 15:13:41.345430    4786 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I1107 15:13:41.345469    4786 kubeadm.go:322] 		timed out waiting for the condition
	I1107 15:13:41.345476    4786 kubeadm.go:322] 
	I1107 15:13:41.345522    4786 kubeadm.go:322] 	This error is likely caused by:
	I1107 15:13:41.345556    4786 kubeadm.go:322] 		- The kubelet is not running
	I1107 15:13:41.345664    4786 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1107 15:13:41.345676    4786 kubeadm.go:322] 
	I1107 15:13:41.345799    4786 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1107 15:13:41.345832    4786 kubeadm.go:322] 		- 'systemctl status kubelet'
	I1107 15:13:41.345867    4786 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I1107 15:13:41.345876    4786 kubeadm.go:322] 
	I1107 15:13:41.345990    4786 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1107 15:13:41.346077    4786 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1107 15:13:41.346086    4786 kubeadm.go:322] 
	I1107 15:13:41.346179    4786 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1107 15:13:41.346230    4786 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I1107 15:13:41.346307    4786 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I1107 15:13:41.346343    4786 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I1107 15:13:41.346352    4786 kubeadm.go:322] 
	W1107 15:13:41.346480    4786 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-973000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-973000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1107 23:11:43.351768    1697 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1107 23:11:46.264057    1697 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1107 23:11:46.264863    1697 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1107 15:13:41.346522    4786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1107 15:13:41.761195    4786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1107 15:13:41.772011    4786 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1107 15:13:41.772073    4786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1107 15:13:41.780280    4786 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1107 15:13:41.780310    4786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1107 15:13:41.832189    4786 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1107 15:13:41.832268    4786 kubeadm.go:322] [preflight] Running pre-flight checks
	I1107 15:13:42.057476    4786 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1107 15:13:42.057562    4786 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1107 15:13:42.057637    4786 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1107 15:13:42.226477    4786 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1107 15:13:42.227334    4786 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1107 15:13:42.227373    4786 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1107 15:13:42.305652    4786 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1107 15:13:42.327116    4786 out.go:204]   - Generating certificates and keys ...
	I1107 15:13:42.327203    4786 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1107 15:13:42.327279    4786 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1107 15:13:42.327409    4786 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1107 15:13:42.327569    4786 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1107 15:13:42.327675    4786 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1107 15:13:42.327777    4786 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1107 15:13:42.327884    4786 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1107 15:13:42.327968    4786 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1107 15:13:42.328024    4786 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1107 15:13:42.328070    4786 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1107 15:13:42.328098    4786 kubeadm.go:322] [certs] Using the existing "sa" key
	I1107 15:13:42.328150    4786 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1107 15:13:42.567415    4786 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1107 15:13:42.650531    4786 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1107 15:13:42.897082    4786 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1107 15:13:42.988959    4786 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1107 15:13:42.989592    4786 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1107 15:13:43.011075    4786 out.go:204]   - Booting up control plane ...
	I1107 15:13:43.011202    4786 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1107 15:13:43.011264    4786 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1107 15:13:43.011314    4786 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1107 15:13:43.011390    4786 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1107 15:13:43.011543    4786 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1107 15:14:22.998399    4786 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1107 15:14:22.999145    4786 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 15:14:22.999426    4786 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 15:14:28.000298    4786 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 15:14:28.000486    4786 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 15:14:38.001888    4786 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 15:14:38.002085    4786 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 15:14:58.003600    4786 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 15:14:58.003812    4786 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 15:15:38.004960    4786 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1107 15:15:38.005217    4786 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1107 15:15:38.005235    4786 kubeadm.go:322] 
	I1107 15:15:38.005292    4786 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I1107 15:15:38.005376    4786 kubeadm.go:322] 		timed out waiting for the condition
	I1107 15:15:38.005388    4786 kubeadm.go:322] 
	I1107 15:15:38.005434    4786 kubeadm.go:322] 	This error is likely caused by:
	I1107 15:15:38.005492    4786 kubeadm.go:322] 		- The kubelet is not running
	I1107 15:15:38.005696    4786 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1107 15:15:38.005710    4786 kubeadm.go:322] 
	I1107 15:15:38.005840    4786 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1107 15:15:38.005878    4786 kubeadm.go:322] 		- 'systemctl status kubelet'
	I1107 15:15:38.005914    4786 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I1107 15:15:38.005923    4786 kubeadm.go:322] 
	I1107 15:15:38.006020    4786 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1107 15:15:38.006091    4786 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1107 15:15:38.006103    4786 kubeadm.go:322] 
	I1107 15:15:38.006162    4786 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1107 15:15:38.006212    4786 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I1107 15:15:38.006290    4786 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I1107 15:15:38.006319    4786 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I1107 15:15:38.006325    4786 kubeadm.go:322] 
	I1107 15:15:38.007544    4786 kubeadm.go:322] W1107 23:13:41.831754    4765 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1107 15:15:38.007710    4786 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1107 15:15:38.007778    4786 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1107 15:15:38.007897    4786 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I1107 15:15:38.007969    4786 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1107 15:15:38.008067    4786 kubeadm.go:322] W1107 23:13:42.994664    4765 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1107 15:15:38.008172    4786 kubeadm.go:322] W1107 23:13:42.995376    4765 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1107 15:15:38.008233    4786 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1107 15:15:38.008294    4786 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I1107 15:15:38.008318    4786 kubeadm.go:406] StartCluster complete in 3m54.754957909s
	I1107 15:15:38.008409    4786 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1107 15:15:38.027277    4786 logs.go:284] 0 containers: []
	W1107 15:15:38.027291    4786 logs.go:286] No container was found matching "kube-apiserver"
	I1107 15:15:38.027357    4786 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1107 15:15:38.046058    4786 logs.go:284] 0 containers: []
	W1107 15:15:38.046071    4786 logs.go:286] No container was found matching "etcd"
	I1107 15:15:38.046139    4786 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1107 15:15:38.064860    4786 logs.go:284] 0 containers: []
	W1107 15:15:38.064873    4786 logs.go:286] No container was found matching "coredns"
	I1107 15:15:38.064943    4786 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1107 15:15:38.084228    4786 logs.go:284] 0 containers: []
	W1107 15:15:38.084242    4786 logs.go:286] No container was found matching "kube-scheduler"
	I1107 15:15:38.084318    4786 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1107 15:15:38.102928    4786 logs.go:284] 0 containers: []
	W1107 15:15:38.102943    4786 logs.go:286] No container was found matching "kube-proxy"
	I1107 15:15:38.103017    4786 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1107 15:15:38.120739    4786 logs.go:284] 0 containers: []
	W1107 15:15:38.120753    4786 logs.go:286] No container was found matching "kube-controller-manager"
	I1107 15:15:38.120829    4786 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1107 15:15:38.138360    4786 logs.go:284] 0 containers: []
	W1107 15:15:38.138373    4786 logs.go:286] No container was found matching "kindnet"
	I1107 15:15:38.138381    4786 logs.go:123] Gathering logs for kubelet ...
	I1107 15:15:38.138388    4786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1107 15:15:38.176254    4786 logs.go:123] Gathering logs for dmesg ...
	I1107 15:15:38.176274    4786 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1107 15:15:38.189188    4786 logs.go:123] Gathering logs for describe nodes ...
	I1107 15:15:38.189202    4786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1107 15:15:38.250069    4786 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1107 15:15:38.250082    4786 logs.go:123] Gathering logs for Docker ...
	I1107 15:15:38.250090    4786 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1107 15:15:38.265452    4786 logs.go:123] Gathering logs for container status ...
	I1107 15:15:38.265466    4786 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1107 15:15:38.314290    4786 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1107 23:13:41.831754    4765 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1107 23:13:42.994664    4765 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1107 23:13:42.995376    4765 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1107 15:15:38.314312    4786 out.go:239] * 
	W1107 15:15:38.314381    4786 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1107 23:13:41.831754    4765 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1107 23:13:42.994664    4765 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1107 23:13:42.995376    4765 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1107 15:15:38.314404    4786 out.go:239] * 
	W1107 15:15:38.315043    4786 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 15:15:38.377592    4786 out.go:177] 
	W1107 15:15:38.419289    4786 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1107 23:13:41.831754    4765 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1107 23:13:42.994664    4765 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1107 23:13:42.995376    4765 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1107 15:15:38.419335    4786 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1107 15:15:38.419356    4786 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1107 15:15:38.440582    4786 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-973000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (260.71s)
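
Editor's note: the failure above matches the preflight warnings in the log — Docker reports the `cgroupfs` cgroup driver while kubeadm recommends `systemd`, and the kubelet never becomes healthy on port 10248. The log's own suggestion is to pass `--extra-config=kubelet.cgroup-driver=systemd` to `minikube start`; the complementary fix (shown here only as an illustrative config fragment, not something this run verified) is to switch Docker itself to the systemd driver via `/etc/docker/daemon.json` inside the node:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

Docker must be restarted (for example with `sudo systemctl restart docker`) before the new driver takes effect, and the kubelet's configured cgroup driver must match it.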

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (109.7s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-973000 addons enable ingress --alsologtostderr -v=5
E1107 15:16:50.432335    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-973000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m49.277558159s)

-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

-- /stdout --
** stderr ** 
	I1107 15:15:38.599641    4980 out.go:296] Setting OutFile to fd 1 ...
	I1107 15:15:38.600058    4980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:15:38.600066    4980 out.go:309] Setting ErrFile to fd 2...
	I1107 15:15:38.600070    4980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:15:38.600267    4980 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
	I1107 15:15:38.600659    4980 mustload.go:65] Loading cluster: ingress-addon-legacy-973000
	I1107 15:15:38.600961    4980 config.go:182] Loaded profile config "ingress-addon-legacy-973000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1107 15:15:38.600978    4980 addons.go:594] checking whether the cluster is paused
	I1107 15:15:38.601060    4980 config.go:182] Loaded profile config "ingress-addon-legacy-973000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1107 15:15:38.601077    4980 host.go:66] Checking if "ingress-addon-legacy-973000" exists ...
	I1107 15:15:38.601485    4980 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-973000 --format={{.State.Status}}
	I1107 15:15:38.653354    4980 ssh_runner.go:195] Run: systemctl --version
	I1107 15:15:38.653490    4980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-973000
	I1107 15:15:38.705779    4980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50479 SSHKeyPath:/Users/jenkins/minikube-integration/17585-1518/.minikube/machines/ingress-addon-legacy-973000/id_rsa Username:docker}
	I1107 15:15:38.787509    4980 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 15:15:38.827379    4980 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1107 15:15:38.848248    4980 config.go:182] Loaded profile config "ingress-addon-legacy-973000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1107 15:15:38.848274    4980 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-973000"
	I1107 15:15:38.848285    4980 addons.go:231] Setting addon ingress=true in "ingress-addon-legacy-973000"
	I1107 15:15:38.848341    4980 host.go:66] Checking if "ingress-addon-legacy-973000" exists ...
	I1107 15:15:38.848961    4980 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-973000 --format={{.State.Status}}
	I1107 15:15:38.921006    4980 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1107 15:15:38.942114    4980 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I1107 15:15:38.963111    4980 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1107 15:15:38.984159    4980 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1107 15:15:39.005502    4980 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1107 15:15:39.005536    4980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I1107 15:15:39.005689    4980 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-973000
	I1107 15:15:39.058816    4980 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50479 SSHKeyPath:/Users/jenkins/minikube-integration/17585-1518/.minikube/machines/ingress-addon-legacy-973000/id_rsa Username:docker}
	I1107 15:15:39.152902    4980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 15:15:39.213275    4980 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:15:39.213304    4980 retry.go:31] will retry after 359.361711ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:15:39.573477    4980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 15:15:39.625140    4980 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:15:39.625159    4980 retry.go:31] will retry after 256.467778ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:15:39.883187    4980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 15:15:39.932851    4980 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:15:39.932873    4980 retry.go:31] will retry after 438.789094ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:15:40.372657    4980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 15:15:40.428360    4980 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:15:40.428388    4980 retry.go:31] will retry after 475.86972ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:15:40.904743    4980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 15:15:40.956253    4980 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:15:40.956271    4980 retry.go:31] will retry after 862.276742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:15:41.819088    4980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 15:15:41.870786    4980 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:15:41.870802    4980 retry.go:31] will retry after 2.523699471s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:15:44.395838    4980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 15:15:44.449375    4980 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:15:44.449392    4980 retry.go:31] will retry after 3.005104007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:15:47.455207    4980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 15:15:47.519464    4980 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:15:47.519481    4980 retry.go:31] will retry after 3.333528959s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:15:50.854122    4980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 15:15:50.911813    4980 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:15:50.911830    4980 retry.go:31] will retry after 5.467173491s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:15:56.380326    4980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 15:15:56.434051    4980 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:15:56.434073    4980 retry.go:31] will retry after 12.863164957s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:16:09.298495    4980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 15:16:09.346913    4980 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:16:09.346939    4980 retry.go:31] will retry after 11.755305771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:16:21.102225    4980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 15:16:21.154219    4980 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:16:21.154235    4980 retry.go:31] will retry after 11.832167441s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:16:32.987221    4980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 15:16:33.038407    4980 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:16:33.038431    4980 retry.go:31] will retry after 28.469791822s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:01.592163    4980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 15:17:01.654699    4980 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:01.654718    4980 retry.go:31] will retry after 26.073125416s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:27.727739    4980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1107 15:17:27.787906    4980 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:27.787937    4980 addons.go:467] Verifying addon ingress=true in "ingress-addon-legacy-973000"
	I1107 15:17:27.808386    4980 out.go:177] * Verifying ingress addon...
	I1107 15:17:27.830562    4980 out.go:177] 
	W1107 15:17:27.852530    4980 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-973000" does not exist: client config: context "ingress-addon-legacy-973000" does not exist]
	W1107 15:17:27.852561    4980 out.go:239] * 
	W1107 15:17:27.856121    4980 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 15:17:27.877243    4980 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-973000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-973000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9d4218765f63c3007c06f41dc7d818e52bd12b293672befc454f3fcb4ee4c875",
	        "Created": "2023-11-07T23:11:28.707079906Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 52210,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-07T23:11:28.92857715Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dbc648475405a75e8c472743ce721cb0b74db98d9501831a17a27a54e2bd3e47",
	        "ResolvConfPath": "/var/lib/docker/containers/9d4218765f63c3007c06f41dc7d818e52bd12b293672befc454f3fcb4ee4c875/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9d4218765f63c3007c06f41dc7d818e52bd12b293672befc454f3fcb4ee4c875/hostname",
	        "HostsPath": "/var/lib/docker/containers/9d4218765f63c3007c06f41dc7d818e52bd12b293672befc454f3fcb4ee4c875/hosts",
	        "LogPath": "/var/lib/docker/containers/9d4218765f63c3007c06f41dc7d818e52bd12b293672befc454f3fcb4ee4c875/9d4218765f63c3007c06f41dc7d818e52bd12b293672befc454f3fcb4ee4c875-json.log",
	        "Name": "/ingress-addon-legacy-973000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-973000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-973000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/77f124f5a78552e7f54c20b7bf36580f1c4eef9af05a1b50d00a7bf8b9a6e849-init/diff:/var/lib/docker/overlay2/ccb94ba9de0d290b85c0dabc7d56c5897d096ed52875ea597c5a32f47fbd6d5e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/77f124f5a78552e7f54c20b7bf36580f1c4eef9af05a1b50d00a7bf8b9a6e849/merged",
	                "UpperDir": "/var/lib/docker/overlay2/77f124f5a78552e7f54c20b7bf36580f1c4eef9af05a1b50d00a7bf8b9a6e849/diff",
	                "WorkDir": "/var/lib/docker/overlay2/77f124f5a78552e7f54c20b7bf36580f1c4eef9af05a1b50d00a7bf8b9a6e849/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-973000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-973000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-973000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-973000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-973000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc9b5d0a0dd50fcdb48faa0793c410d694f64a07ed1c11ea33ba4909d711f796",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50479"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50475"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50476"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50477"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50478"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cc9b5d0a0dd5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-973000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9d4218765f63",
	                        "ingress-addon-legacy-973000"
	                    ],
	                    "NetworkID": "329230eb71db3c6da5787d3a5a7b78591799f8ed084f50a57f1438cf543d2e40",
	                    "EndpointID": "7e52dd87d9687ec9ac5dc5889073c955dc3d834c2f673a85a26666c3d46dbc9b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-973000 -n ingress-addon-legacy-973000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-973000 -n ingress-addon-legacy-973000: exit status 6 (369.031189ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 15:17:28.314139    5020 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-973000" does not appear in /Users/jenkins/minikube-integration/17585-1518/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-973000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (109.70s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.61s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-973000 addons enable ingress-dns --alsologtostderr -v=5
E1107 15:18:43.725382    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-973000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m29.192430784s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 15:17:28.379352    5030 out.go:296] Setting OutFile to fd 1 ...
	I1107 15:17:28.379668    5030 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:17:28.379674    5030 out.go:309] Setting ErrFile to fd 2...
	I1107 15:17:28.379678    5030 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:17:28.379850    5030 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
	I1107 15:17:28.380192    5030 mustload.go:65] Loading cluster: ingress-addon-legacy-973000
	I1107 15:17:28.380468    5030 config.go:182] Loaded profile config "ingress-addon-legacy-973000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1107 15:17:28.380486    5030 addons.go:594] checking whether the cluster is paused
	I1107 15:17:28.380563    5030 config.go:182] Loaded profile config "ingress-addon-legacy-973000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1107 15:17:28.380579    5030 host.go:66] Checking if "ingress-addon-legacy-973000" exists ...
	I1107 15:17:28.380954    5030 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-973000 --format={{.State.Status}}
	I1107 15:17:28.431280    5030 ssh_runner.go:195] Run: systemctl --version
	I1107 15:17:28.431380    5030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-973000
	I1107 15:17:28.483112    5030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50479 SSHKeyPath:/Users/jenkins/minikube-integration/17585-1518/.minikube/machines/ingress-addon-legacy-973000/id_rsa Username:docker}
	I1107 15:17:28.566699    5030 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1107 15:17:28.606755    5030 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1107 15:17:28.627787    5030 config.go:182] Loaded profile config "ingress-addon-legacy-973000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1107 15:17:28.627813    5030 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-973000"
	I1107 15:17:28.627824    5030 addons.go:231] Setting addon ingress-dns=true in "ingress-addon-legacy-973000"
	I1107 15:17:28.627873    5030 host.go:66] Checking if "ingress-addon-legacy-973000" exists ...
	I1107 15:17:28.628473    5030 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-973000 --format={{.State.Status}}
	I1107 15:17:28.702000    5030 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1107 15:17:28.723304    5030 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I1107 15:17:28.744565    5030 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1107 15:17:28.744599    5030 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I1107 15:17:28.744735    5030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-973000
	I1107 15:17:28.796214    5030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50479 SSHKeyPath:/Users/jenkins/minikube-integration/17585-1518/.minikube/machines/ingress-addon-legacy-973000/id_rsa Username:docker}
	I1107 15:17:28.890189    5030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 15:17:28.940001    5030 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:28.940025    5030 retry.go:31] will retry after 160.073373ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:29.101662    5030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 15:17:29.172945    5030 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:29.172996    5030 retry.go:31] will retry after 242.469007ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:29.416609    5030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 15:17:29.501954    5030 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:29.501977    5030 retry.go:31] will retry after 736.951115ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:30.239453    5030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 15:17:30.289418    5030 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:30.289442    5030 retry.go:31] will retry after 1.058555134s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:31.348713    5030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 15:17:31.399843    5030 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:31.399864    5030 retry.go:31] will retry after 971.841604ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:32.372297    5030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 15:17:32.423741    5030 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:32.423761    5030 retry.go:31] will retry after 2.303707531s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:34.727607    5030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 15:17:34.776969    5030 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:34.776987    5030 retry.go:31] will retry after 2.700810691s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:37.480034    5030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 15:17:37.527838    5030 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:37.527862    5030 retry.go:31] will retry after 3.407076024s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:40.935561    5030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 15:17:40.992557    5030 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:40.992580    5030 retry.go:31] will retry after 5.547310236s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:46.541595    5030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 15:17:46.593127    5030 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:46.593153    5030 retry.go:31] will retry after 6.301778763s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:52.895073    5030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 15:17:52.946635    5030 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:17:52.946653    5030 retry.go:31] will retry after 12.595977384s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:18:05.542983    5030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 15:18:05.593621    5030 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:18:05.593641    5030 retry.go:31] will retry after 14.328037823s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:18:19.922614    5030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 15:18:19.972633    5030 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:18:19.972654    5030 retry.go:31] will retry after 37.393027759s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:18:57.366183    5030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1107 15:18:57.428334    5030 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1107 15:18:57.450119    5030 out.go:177] 
	W1107 15:18:57.470550    5030 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W1107 15:18:57.470570    5030 out.go:239] * 
	* 
	W1107 15:18:57.472994    5030 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 15:18:57.493708    5030 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
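The `retry.go:31` lines in the trace above show the apply being retried with roughly doubling, jittered waits (160ms, 242ms, 736ms, … 37s) until the addon-enable budget is exhausted. A small sketch of that pattern, with illustrative parameter values rather than minikube's actual ones:

```python
# Sketch of the jittered exponential backoff visible in the retry.go:31 lines
# above. The 160ms seed delay and 90s budget mirror the trace; the jitter
# factor and function names are assumptions for illustration.
import random

def backoff_schedule(initial: float, budget: float, factor: float = 2.0,
                     jitter: float = 0.5, seed: int = 0) -> list[float]:
    """Return the wait times (seconds) taken before the retry budget is spent."""
    rng = random.Random(seed)
    waits, spent, base = [], 0.0, initial
    while spent + base <= budget:
        wait = base * (1.0 + jitter * rng.random())  # jitter in [1.0, 1.5)
        waits.append(round(wait, 3))
        spent += wait
        base *= factor  # roughly double between attempts
    return waits

schedule = backoff_schedule(initial=0.16, budget=90.0)
print(schedule)  # doubling-ish waits whose total stays under the 90s budget
```

Because every attempt hits the same refused `localhost:8443` (the apiserver never came up), the backoff only delays the eventual `MK_ADDON_ENABLE` failure.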
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-973000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-973000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9d4218765f63c3007c06f41dc7d818e52bd12b293672befc454f3fcb4ee4c875",
	        "Created": "2023-11-07T23:11:28.707079906Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 52210,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-07T23:11:28.92857715Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dbc648475405a75e8c472743ce721cb0b74db98d9501831a17a27a54e2bd3e47",
	        "ResolvConfPath": "/var/lib/docker/containers/9d4218765f63c3007c06f41dc7d818e52bd12b293672befc454f3fcb4ee4c875/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9d4218765f63c3007c06f41dc7d818e52bd12b293672befc454f3fcb4ee4c875/hostname",
	        "HostsPath": "/var/lib/docker/containers/9d4218765f63c3007c06f41dc7d818e52bd12b293672befc454f3fcb4ee4c875/hosts",
	        "LogPath": "/var/lib/docker/containers/9d4218765f63c3007c06f41dc7d818e52bd12b293672befc454f3fcb4ee4c875/9d4218765f63c3007c06f41dc7d818e52bd12b293672befc454f3fcb4ee4c875-json.log",
	        "Name": "/ingress-addon-legacy-973000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-973000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-973000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/77f124f5a78552e7f54c20b7bf36580f1c4eef9af05a1b50d00a7bf8b9a6e849-init/diff:/var/lib/docker/overlay2/ccb94ba9de0d290b85c0dabc7d56c5897d096ed52875ea597c5a32f47fbd6d5e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/77f124f5a78552e7f54c20b7bf36580f1c4eef9af05a1b50d00a7bf8b9a6e849/merged",
	                "UpperDir": "/var/lib/docker/overlay2/77f124f5a78552e7f54c20b7bf36580f1c4eef9af05a1b50d00a7bf8b9a6e849/diff",
	                "WorkDir": "/var/lib/docker/overlay2/77f124f5a78552e7f54c20b7bf36580f1c4eef9af05a1b50d00a7bf8b9a6e849/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-973000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-973000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-973000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-973000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-973000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc9b5d0a0dd50fcdb48faa0793c410d694f64a07ed1c11ea33ba4909d711f796",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50479"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50475"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50476"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50477"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50478"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cc9b5d0a0dd5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-973000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9d4218765f63",
	                        "ingress-addon-legacy-973000"
	                    ],
	                    "NetworkID": "329230eb71db3c6da5787d3a5a7b78591799f8ed084f50a57f1438cf543d2e40",
	                    "EndpointID": "7e52dd87d9687ec9ac5dc5889073c955dc3d834c2f673a85a26666c3d46dbc9b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-973000 -n ingress-addon-legacy-973000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-973000 -n ingress-addon-legacy-973000: exit status 6 (366.987656ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 15:18:57.925373    5061 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-973000" does not appear in /Users/jenkins/minikube-integration/17585-1518/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-973000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.61s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.42s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:200: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-973000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-973000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9d4218765f63c3007c06f41dc7d818e52bd12b293672befc454f3fcb4ee4c875",
	        "Created": "2023-11-07T23:11:28.707079906Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 52210,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-07T23:11:28.92857715Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dbc648475405a75e8c472743ce721cb0b74db98d9501831a17a27a54e2bd3e47",
	        "ResolvConfPath": "/var/lib/docker/containers/9d4218765f63c3007c06f41dc7d818e52bd12b293672befc454f3fcb4ee4c875/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9d4218765f63c3007c06f41dc7d818e52bd12b293672befc454f3fcb4ee4c875/hostname",
	        "HostsPath": "/var/lib/docker/containers/9d4218765f63c3007c06f41dc7d818e52bd12b293672befc454f3fcb4ee4c875/hosts",
	        "LogPath": "/var/lib/docker/containers/9d4218765f63c3007c06f41dc7d818e52bd12b293672befc454f3fcb4ee4c875/9d4218765f63c3007c06f41dc7d818e52bd12b293672befc454f3fcb4ee4c875-json.log",
	        "Name": "/ingress-addon-legacy-973000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-973000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-973000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/77f124f5a78552e7f54c20b7bf36580f1c4eef9af05a1b50d00a7bf8b9a6e849-init/diff:/var/lib/docker/overlay2/ccb94ba9de0d290b85c0dabc7d56c5897d096ed52875ea597c5a32f47fbd6d5e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/77f124f5a78552e7f54c20b7bf36580f1c4eef9af05a1b50d00a7bf8b9a6e849/merged",
	                "UpperDir": "/var/lib/docker/overlay2/77f124f5a78552e7f54c20b7bf36580f1c4eef9af05a1b50d00a7bf8b9a6e849/diff",
	                "WorkDir": "/var/lib/docker/overlay2/77f124f5a78552e7f54c20b7bf36580f1c4eef9af05a1b50d00a7bf8b9a6e849/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-973000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-973000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-973000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-973000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-973000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc9b5d0a0dd50fcdb48faa0793c410d694f64a07ed1c11ea33ba4909d711f796",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50479"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50475"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50476"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50477"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50478"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cc9b5d0a0dd5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-973000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9d4218765f63",
	                        "ingress-addon-legacy-973000"
	                    ],
	                    "NetworkID": "329230eb71db3c6da5787d3a5a7b78591799f8ed084f50a57f1438cf543d2e40",
	                    "EndpointID": "7e52dd87d9687ec9ac5dc5889073c955dc3d834c2f673a85a26666c3d46dbc9b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-973000 -n ingress-addon-legacy-973000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-973000 -n ingress-addon-legacy-973000: exit status 6 (371.931428ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 15:18:58.349878    5073 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-973000" does not appear in /Users/jenkins/minikube-integration/17585-1518/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-973000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.42s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (885.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-552000 ssh -- ls /minikube-host
E1107 15:23:43.720066    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:24:06.573320    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 15:25:06.771639    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:28:43.715635    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:29:06.566920    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 15:30:29.619852    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 15:33:43.710059    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:34:06.561057    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-552000 ssh -- ls /minikube-host: signal: killed (14m44.93110043s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-552000 ssh -- ls /minikube-host" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountSecond]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-552000
helpers_test.go:235: (dbg) docker inspect mount-start-2-552000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ef61021ddee92b100c045da3d02bd366442f9292f479f9597dc2abd36e1403a8",
	        "Created": "2023-11-07T23:22:46.147694185Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 99025,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-07T23:22:46.406095488Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:dbc648475405a75e8c472743ce721cb0b74db98d9501831a17a27a54e2bd3e47",
	        "ResolvConfPath": "/var/lib/docker/containers/ef61021ddee92b100c045da3d02bd366442f9292f479f9597dc2abd36e1403a8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ef61021ddee92b100c045da3d02bd366442f9292f479f9597dc2abd36e1403a8/hostname",
	        "HostsPath": "/var/lib/docker/containers/ef61021ddee92b100c045da3d02bd366442f9292f479f9597dc2abd36e1403a8/hosts",
	        "LogPath": "/var/lib/docker/containers/ef61021ddee92b100c045da3d02bd366442f9292f479f9597dc2abd36e1403a8/ef61021ddee92b100c045da3d02bd366442f9292f479f9597dc2abd36e1403a8-json.log",
	        "Name": "/mount-start-2-552000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "mount-start-2-552000:/var",
	                "/host_mnt/Users:/minikube-host"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-2-552000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f72f97a65b37596a5c76f2ec913cd5a7042f9f280d138a71cb5efe82aec653a4-init/diff:/var/lib/docker/overlay2/ccb94ba9de0d290b85c0dabc7d56c5897d096ed52875ea597c5a32f47fbd6d5e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f72f97a65b37596a5c76f2ec913cd5a7042f9f280d138a71cb5efe82aec653a4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f72f97a65b37596a5c76f2ec913cd5a7042f9f280d138a71cb5efe82aec653a4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f72f97a65b37596a5c76f2ec913cd5a7042f9f280d138a71cb5efe82aec653a4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/host_mnt/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "mount-start-2-552000",
	                "Source": "/var/lib/docker/volumes/mount-start-2-552000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-2-552000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-2-552000",
	                "name.minikube.sigs.k8s.io": "mount-start-2-552000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7b79b341a8696c0f5cb9861d779790f39318b281a8a856217a7f796c8f8ff7a0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50761"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50762"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50763"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50764"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50765"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7b79b341a869",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-2-552000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ef61021ddee9",
	                        "mount-start-2-552000"
	                    ],
	                    "NetworkID": "ec924f5cdfdec1505ef4f63baba182841d7d398206d0299d55424eaf5c7dc089",
	                    "EndpointID": "cc8515b089e8f3a9603a44f840f2c9409de8fafd8917ba14023937350730f400",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-552000 -n mount-start-2-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-552000 -n mount-start-2-552000: exit status 6 (370.641384ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 15:37:37.074323    6891 status.go:415] kubeconfig endpoint: extract IP: "mount-start-2-552000" does not appear in /Users/jenkins/minikube-integration/17585-1518/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-552000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountSecond (885.35s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (754.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-985000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E1107 15:39:06.605990    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 15:41:46.806264    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:43:43.749329    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:44:06.600495    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 15:47:09.653402    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 15:48:43.741327    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:49:06.592559    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-985000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m34.036841481s)

                                                
                                                
-- stdout --
	* [multinode-985000] minikube v1.32.0 on Darwin 14.1
	  - MINIKUBE_LOCATION=17585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node multinode-985000 in cluster multinode-985000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-985000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 15:38:47.824879    7015 out.go:296] Setting OutFile to fd 1 ...
	I1107 15:38:47.825081    7015 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:38:47.825087    7015 out.go:309] Setting ErrFile to fd 2...
	I1107 15:38:47.825091    7015 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:38:47.825271    7015 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
	I1107 15:38:47.826734    7015 out.go:303] Setting JSON to false
	I1107 15:38:47.849193    7015 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4101,"bootTime":1699396226,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1107 15:38:47.849298    7015 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1107 15:38:47.870768    7015 out.go:177] * [multinode-985000] minikube v1.32.0 on Darwin 14.1
	I1107 15:38:47.891858    7015 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 15:38:47.892032    7015 notify.go:220] Checking for updates...
	I1107 15:38:47.912812    7015 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	I1107 15:38:47.934499    7015 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 15:38:47.976622    7015 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 15:38:48.018808    7015 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	I1107 15:38:48.039750    7015 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 15:38:48.061173    7015 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 15:38:48.117742    7015 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.0 (126437)
	I1107 15:38:48.117892    7015 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 15:38:48.221989    7015 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:68 SystemTime:2023-11-07 23:38:48.161067662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218715136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-e2cce99df426 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1107 15:38:48.243803    7015 out.go:177] * Using the docker driver based on user configuration
	I1107 15:38:48.285377    7015 start.go:298] selected driver: docker
	I1107 15:38:48.285425    7015 start.go:902] validating driver "docker" against <nil>
	I1107 15:38:48.285443    7015 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 15:38:48.289545    7015 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 15:38:48.391243    7015 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:false NGoroutines:68 SystemTime:2023-11-07 23:38:48.33014914 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218715136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-e2cce99df426 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1107 15:38:48.391425    7015 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 15:38:48.391624    7015 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 15:38:48.412877    7015 out.go:177] * Using Docker Desktop driver with root privileges
	I1107 15:38:48.434053    7015 cni.go:84] Creating CNI manager for ""
	I1107 15:38:48.434088    7015 cni.go:136] 0 nodes found, recommending kindnet
	I1107 15:38:48.434108    7015 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1107 15:38:48.434128    7015 start_flags.go:323] config:
	{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-985000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 15:38:48.455829    7015 out.go:177] * Starting control plane node multinode-985000 in cluster multinode-985000
	I1107 15:38:48.497993    7015 cache.go:121] Beginning downloading kic base image for docker with docker
	I1107 15:38:48.520797    7015 out.go:177] * Pulling base image ...
	I1107 15:38:48.563832    7015 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 15:38:48.563905    7015 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1107 15:38:48.563923    7015 cache.go:56] Caching tarball of preloaded images
	I1107 15:38:48.563937    7015 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 15:38:48.564148    7015 preload.go:174] Found /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 15:38:48.564167    7015 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1107 15:38:48.565783    7015 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/multinode-985000/config.json ...
	I1107 15:38:48.565921    7015 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/multinode-985000/config.json: {Name:mk0301e19ab7b0aa11bab4087eb12bcb8baef8f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 15:38:48.617511    7015 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1107 15:38:48.617523    7015 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1107 15:38:48.617539    7015 cache.go:194] Successfully downloaded all kic artifacts
	I1107 15:38:48.617577    7015 start.go:365] acquiring machines lock for multinode-985000: {Name:mk708b1ebc8aeef69ffae97883f5d94723698aef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 15:38:48.617734    7015 start.go:369] acquired machines lock for "multinode-985000" in 142.362µs
	I1107 15:38:48.617765    7015 start.go:93] Provisioning new machine with config: &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-985000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1107 15:38:48.617867    7015 start.go:125] createHost starting for "" (driver="docker")
	I1107 15:38:48.641598    7015 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1107 15:38:48.641927    7015 start.go:159] libmachine.API.Create for "multinode-985000" (driver="docker")
	I1107 15:38:48.641975    7015 client.go:168] LocalClient.Create starting
	I1107 15:38:48.642155    7015 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca.pem
	I1107 15:38:48.642249    7015 main.go:141] libmachine: Decoding PEM data...
	I1107 15:38:48.642309    7015 main.go:141] libmachine: Parsing certificate...
	I1107 15:38:48.642425    7015 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/cert.pem
	I1107 15:38:48.642496    7015 main.go:141] libmachine: Decoding PEM data...
	I1107 15:38:48.642513    7015 main.go:141] libmachine: Parsing certificate...
	I1107 15:38:48.643374    7015 cli_runner.go:164] Run: docker network inspect multinode-985000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 15:38:48.694226    7015 cli_runner.go:211] docker network inspect multinode-985000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 15:38:48.694324    7015 network_create.go:281] running [docker network inspect multinode-985000] to gather additional debugging logs...
	I1107 15:38:48.694342    7015 cli_runner.go:164] Run: docker network inspect multinode-985000
	W1107 15:38:48.744471    7015 cli_runner.go:211] docker network inspect multinode-985000 returned with exit code 1
	I1107 15:38:48.744503    7015 network_create.go:284] error running [docker network inspect multinode-985000]: docker network inspect multinode-985000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-985000 not found
	I1107 15:38:48.744516    7015 network_create.go:286] output of [docker network inspect multinode-985000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-985000 not found
	
	** /stderr **
	I1107 15:38:48.744632    7015 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 15:38:48.797199    7015 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 15:38:48.797597    7015 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022ebae0}
	I1107 15:38:48.797614    7015 network_create.go:124] attempt to create docker network multinode-985000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1107 15:38:48.797680    7015 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-985000 multinode-985000
	I1107 15:38:48.883752    7015 network_create.go:108] docker network multinode-985000 192.168.58.0/24 created
	I1107 15:38:48.883790    7015 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-985000" container
	I1107 15:38:48.883901    7015 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 15:38:48.934815    7015 cli_runner.go:164] Run: docker volume create multinode-985000 --label name.minikube.sigs.k8s.io=multinode-985000 --label created_by.minikube.sigs.k8s.io=true
	I1107 15:38:48.986285    7015 oci.go:103] Successfully created a docker volume multinode-985000
	I1107 15:38:48.986408    7015 cli_runner.go:164] Run: docker run --rm --name multinode-985000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-985000 --entrypoint /usr/bin/test -v multinode-985000:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 15:38:49.361482    7015 oci.go:107] Successfully prepared a docker volume multinode-985000
	I1107 15:38:49.361528    7015 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 15:38:49.361539    7015 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 15:38:49.361665    7015 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-985000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 15:44:48.634715    7015 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 15:44:48.634863    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:44:48.689773    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:44:48.689902    7015 retry.go:31] will retry after 292.500331ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:44:48.984769    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:44:49.038785    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:44:49.038880    7015 retry.go:31] will retry after 315.265888ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:44:49.355711    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:44:49.409724    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:44:49.409825    7015 retry.go:31] will retry after 614.572469ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:44:50.024744    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:44:50.077703    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	W1107 15:44:50.077806    7015 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	
	W1107 15:44:50.077828    7015 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:44:50.077892    7015 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 15:44:50.077948    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:44:50.129268    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:44:50.129361    7015 retry.go:31] will retry after 160.339138ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:44:50.292096    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:44:50.343674    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:44:50.343763    7015 retry.go:31] will retry after 355.981987ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:44:50.700944    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:44:50.753715    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:44:50.753805    7015 retry.go:31] will retry after 838.150433ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:44:51.592972    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:44:51.647002    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	W1107 15:44:51.647110    7015 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	
	W1107 15:44:51.647125    7015 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:44:51.647138    7015 start.go:128] duration metric: createHost completed in 6m3.037201268s
	I1107 15:44:51.647145    7015 start.go:83] releasing machines lock for "multinode-985000", held for 6m3.037346546s
	W1107 15:44:51.647158    7015 start.go:691] error starting host: creating host: create host timed out in 360.000000 seconds
	I1107 15:44:51.647603    7015 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:44:51.697211    7015 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:44:51.697257    7015 delete.go:82] Unable to get host status for multinode-985000, assuming it has already been deleted: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	W1107 15:44:51.697334    7015 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1107 15:44:51.697344    7015 start.go:706] Will try again in 5 seconds ...
	I1107 15:44:56.699302    7015 start.go:365] acquiring machines lock for multinode-985000: {Name:mk708b1ebc8aeef69ffae97883f5d94723698aef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 15:44:56.699574    7015 start.go:369] acquired machines lock for "multinode-985000" in 158.51µs
	I1107 15:44:56.699610    7015 start.go:96] Skipping create...Using existing machine configuration
	I1107 15:44:56.699625    7015 fix.go:54] fixHost starting: 
	I1107 15:44:56.700133    7015 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:44:56.752171    7015 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:44:56.752228    7015 fix.go:102] recreateIfNeeded on multinode-985000: state= err=unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:44:56.752250    7015 fix.go:107] machineExists: false. err=machine does not exist
	I1107 15:44:56.773718    7015 out.go:177] * docker "multinode-985000" container is missing, will recreate.
	I1107 15:44:56.815748    7015 delete.go:124] DEMOLISHING multinode-985000 ...
	I1107 15:44:56.815979    7015 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:44:56.869163    7015 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	W1107 15:44:56.869204    7015 stop.go:75] unable to get state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:44:56.869222    7015 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:44:56.869614    7015 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:44:56.918890    7015 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:44:56.918945    7015 delete.go:82] Unable to get host status for multinode-985000, assuming it has already been deleted: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:44:56.919033    7015 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-985000
	W1107 15:44:56.968873    7015 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-985000 returned with exit code 1
	I1107 15:44:56.968913    7015 kic.go:371] could not find the container multinode-985000 to remove it. will try anyways
	I1107 15:44:56.968995    7015 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:44:57.018594    7015 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	W1107 15:44:57.018633    7015 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:44:57.018716    7015 cli_runner.go:164] Run: docker exec --privileged -t multinode-985000 /bin/bash -c "sudo init 0"
	W1107 15:44:57.068724    7015 cli_runner.go:211] docker exec --privileged -t multinode-985000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1107 15:44:57.068760    7015 oci.go:650] error shutdown multinode-985000: docker exec --privileged -t multinode-985000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:44:58.069161    7015 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:44:58.123110    7015 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:44:58.123155    7015 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:44:58.123163    7015 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 15:44:58.123190    7015 retry.go:31] will retry after 499.480109ms: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:44:58.622976    7015 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:44:58.677581    7015 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:44:58.677625    7015 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:44:58.677636    7015 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 15:44:58.677659    7015 retry.go:31] will retry after 722.202405ms: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:44:59.402256    7015 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:44:59.453376    7015 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:44:59.453426    7015 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:44:59.453437    7015 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 15:44:59.453459    7015 retry.go:31] will retry after 1.391206344s: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:45:00.845574    7015 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:45:00.899941    7015 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:45:00.899982    7015 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:45:00.899992    7015 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 15:45:00.900022    7015 retry.go:31] will retry after 1.874758148s: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:45:02.775331    7015 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:45:02.846939    7015 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:45:02.846991    7015 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:45:02.847007    7015 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 15:45:02.847030    7015 retry.go:31] will retry after 2.255434628s: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:45:05.102999    7015 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:45:05.156728    7015 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:45:05.156777    7015 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:45:05.156838    7015 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 15:45:05.156861    7015 retry.go:31] will retry after 3.974885004s: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:45:09.133967    7015 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:45:09.188905    7015 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:45:09.188948    7015 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:45:09.188957    7015 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 15:45:09.188980    7015 retry.go:31] will retry after 4.734183091s: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:45:13.923636    7015 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:45:13.978349    7015 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:45:13.978390    7015 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:45:13.978401    7015 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 15:45:13.978429    7015 oci.go:88] couldn't shut down multinode-985000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	 
	I1107 15:45:13.978530    7015 cli_runner.go:164] Run: docker rm -f -v multinode-985000
	I1107 15:45:14.029519    7015 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-985000
	W1107 15:45:14.078901    7015 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-985000 returned with exit code 1
	I1107 15:45:14.079008    7015 cli_runner.go:164] Run: docker network inspect multinode-985000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 15:45:14.130808    7015 cli_runner.go:164] Run: docker network rm multinode-985000
	I1107 15:45:14.221719    7015 fix.go:114] Sleeping 1 second for extra luck!
	I1107 15:45:15.223985    7015 start.go:125] createHost starting for "" (driver="docker")
	I1107 15:45:15.245811    7015 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1107 15:45:15.245949    7015 start.go:159] libmachine.API.Create for "multinode-985000" (driver="docker")
	I1107 15:45:15.245972    7015 client.go:168] LocalClient.Create starting
	I1107 15:45:15.246117    7015 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca.pem
	I1107 15:45:15.246183    7015 main.go:141] libmachine: Decoding PEM data...
	I1107 15:45:15.246200    7015 main.go:141] libmachine: Parsing certificate...
	I1107 15:45:15.246263    7015 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/cert.pem
	I1107 15:45:15.246310    7015 main.go:141] libmachine: Decoding PEM data...
	I1107 15:45:15.246326    7015 main.go:141] libmachine: Parsing certificate...
	I1107 15:45:15.267294    7015 cli_runner.go:164] Run: docker network inspect multinode-985000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 15:45:15.320901    7015 cli_runner.go:211] docker network inspect multinode-985000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 15:45:15.320998    7015 network_create.go:281] running [docker network inspect multinode-985000] to gather additional debugging logs...
	I1107 15:45:15.321019    7015 cli_runner.go:164] Run: docker network inspect multinode-985000
	W1107 15:45:15.371132    7015 cli_runner.go:211] docker network inspect multinode-985000 returned with exit code 1
	I1107 15:45:15.371158    7015 network_create.go:284] error running [docker network inspect multinode-985000]: docker network inspect multinode-985000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-985000 not found
	I1107 15:45:15.371168    7015 network_create.go:286] output of [docker network inspect multinode-985000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-985000 not found
	
	** /stderr **
	I1107 15:45:15.371291    7015 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 15:45:15.423042    7015 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 15:45:15.424468    7015 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 15:45:15.424815    7015 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0007019c0}
	I1107 15:45:15.424829    7015 network_create.go:124] attempt to create docker network multinode-985000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1107 15:45:15.424897    7015 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-985000 multinode-985000
	W1107 15:45:15.474908    7015 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-985000 multinode-985000 returned with exit code 1
	W1107 15:45:15.474953    7015 network_create.go:149] failed to create docker network multinode-985000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-985000 multinode-985000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1107 15:45:15.474971    7015 network_create.go:116] failed to create docker network multinode-985000 192.168.67.0/24, will retry: subnet is taken
	I1107 15:45:15.476446    7015 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 15:45:15.476906    7015 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00049d610}
	I1107 15:45:15.476922    7015 network_create.go:124] attempt to create docker network multinode-985000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1107 15:45:15.476990    7015 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-985000 multinode-985000
	I1107 15:45:15.563426    7015 network_create.go:108] docker network multinode-985000 192.168.76.0/24 created
	I1107 15:45:15.563473    7015 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-985000" container
	I1107 15:45:15.563579    7015 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 15:45:15.613965    7015 cli_runner.go:164] Run: docker volume create multinode-985000 --label name.minikube.sigs.k8s.io=multinode-985000 --label created_by.minikube.sigs.k8s.io=true
	I1107 15:45:15.663036    7015 oci.go:103] Successfully created a docker volume multinode-985000
	I1107 15:45:15.663159    7015 cli_runner.go:164] Run: docker run --rm --name multinode-985000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-985000 --entrypoint /usr/bin/test -v multinode-985000:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 15:45:15.946627    7015 oci.go:107] Successfully prepared a docker volume multinode-985000
	I1107 15:45:15.946657    7015 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 15:45:15.946668    7015 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 15:45:15.946765    7015 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-985000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 15:51:15.238269    7015 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 15:51:15.238394    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:51:15.292079    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:51:15.292189    7015 retry.go:31] will retry after 334.136108ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:51:15.628773    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:51:15.684118    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:51:15.684231    7015 retry.go:31] will retry after 475.70378ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:51:16.162325    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:51:16.216098    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:51:16.216194    7015 retry.go:31] will retry after 815.280024ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:51:17.031902    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:51:17.086185    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	W1107 15:51:17.086284    7015 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	
	W1107 15:51:17.086303    7015 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:51:17.086361    7015 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 15:51:17.086414    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:51:17.136506    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:51:17.136616    7015 retry.go:31] will retry after 197.603466ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:51:17.334965    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:51:17.390412    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:51:17.390509    7015 retry.go:31] will retry after 507.23983ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:51:17.898625    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:51:17.953213    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:51:17.953304    7015 retry.go:31] will retry after 420.852201ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:51:18.374753    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:51:18.427315    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	W1107 15:51:18.427419    7015 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	
	W1107 15:51:18.427439    7015 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:51:18.427450    7015 start.go:128] duration metric: createHost completed in 6m3.211641251s
	I1107 15:51:18.427513    7015 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 15:51:18.427569    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:51:18.476972    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:51:18.477062    7015 retry.go:31] will retry after 218.571819ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:51:18.698039    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:51:18.751366    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:51:18.751460    7015 retry.go:31] will retry after 365.367824ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:51:19.119170    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:51:19.174123    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:51:19.174215    7015 retry.go:31] will retry after 687.157059ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:51:19.862213    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:51:19.915748    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	W1107 15:51:19.915842    7015 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	
	W1107 15:51:19.915869    7015 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:51:19.915928    7015 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 15:51:19.915979    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:51:19.965486    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:51:19.965584    7015 retry.go:31] will retry after 323.023448ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:51:20.290945    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:51:20.344089    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:51:20.344177    7015 retry.go:31] will retry after 348.507744ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:51:20.693318    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:51:20.747231    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:51:20.747321    7015 retry.go:31] will retry after 837.745378ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:51:21.587104    7015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:51:21.639563    7015 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	W1107 15:51:21.639675    7015 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	
	W1107 15:51:21.639690    7015 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:51:21.639704    7015 fix.go:56] fixHost completed within 6m24.948770134s
	I1107 15:51:21.639711    7015 start.go:83] releasing machines lock for "multinode-985000", held for 6m24.948813836s
	W1107 15:51:21.639792    7015 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-985000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-985000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1107 15:51:21.683349    7015 out.go:177] 
	W1107 15:51:21.705335    7015 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1107 15:51:21.705375    7015 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1107 15:51:21.705390    7015 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1107 15:51:21.726180    7015 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-985000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-985000
helpers_test.go:235: (dbg) docker inspect multinode-985000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-985000",
	        "Id": "63fdacf909e2e1e299ae5b8324c6490351f39723350dc9d23921d4f19e963223",
	        "Created": "2023-11-07T23:45:15.52482752Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-985000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000: exit status 7 (108.07665ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 15:51:21.963790    7333 status.go:249] status error: host: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-985000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (754.21s)
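Editor's note: the repeated failing command above, `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000`, is how minikube resolves the host port mapped to the guest's SSH port 22. The template only resolves when the container exists and has published ports; with the container missing, the daemon returns "No such container" and the template run exits non-zero, which is the error cascading through every retry. A minimal sketch of what that template lookup does, over a hypothetical fragment of `docker inspect` JSON (the container name, port, and JSON below are illustrative, not from this run):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// inspect mirrors just the slice of `docker inspect` output that the
// template '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
// traverses: NetworkSettings.Ports is a map from container port ("22/tcp")
// to a list of host bindings.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

// sshHostPort extracts the first host port bound to 22/tcp, the same
// value the Go template above would print.
func sshHostPort(raw []byte) (string, error) {
	var c inspect
	if err := json.Unmarshal(raw, &c); err != nil {
		return "", err
	}
	bindings, ok := c.NetworkSettings.Ports["22/tcp"]
	if !ok || len(bindings) == 0 {
		return "", fmt.Errorf("port 22/tcp not published")
	}
	return bindings[0].HostPort, nil
}

func main() {
	// Hypothetical sample data standing in for `docker inspect <name>`.
	sample := []byte(`{"NetworkSettings":{"Ports":{"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"58372"}]}}}`)
	port, err := sshHostPort(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(port)
}
```

When the container has been removed, there is no inspect document at all, so nothing in this traversal can succeed; that is why the `df -h /var` and `df -BG /var` probes above fail at the SSH-session stage rather than inside the guest.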

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (112.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (91.034856ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-985000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- rollout status deployment/busybox: exit status 1 (92.608383ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-985000"

                                                
                                                
** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (92.378014ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-985000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (95.808849ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-985000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.0285ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-985000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (149.339639ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-985000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.24115ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-985000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.695667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-985000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (96.253536ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-985000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (97.842737ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-985000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.417256ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-985000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (96.898814ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-985000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.855096ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-985000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (92.406467ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-985000"

                                                
                                                
** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- exec  -- nslookup kubernetes.io: exit status 1 (92.098545ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-985000"

                                                
                                                
** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- exec  -- nslookup kubernetes.default: exit status 1 (92.145838ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-985000"

                                                
                                                
** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (91.867781ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-985000"

                                                
                                                
** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-985000
helpers_test.go:235: (dbg) docker inspect multinode-985000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-985000",
	        "Id": "63fdacf909e2e1e299ae5b8324c6490351f39723350dc9d23921d4f19e963223",
	        "Created": "2023-11-07T23:45:15.52482752Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-985000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000: exit status 7 (107.194473ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 15:53:14.865526    7418 status.go:249] status error: host: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-985000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (112.90s)
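Editor's note: the query this subtest retries, `kubectl get pods -o jsonpath='{.items[*].status.podIP}'`, walks a PodList and prints each pod's IP space-separated; it fails here at the connection stage because no server exists for the cluster context, not because of the jsonpath itself. A minimal sketch of that traversal, over hypothetical sample data (the IPs below are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// podList mirrors the slice of a Kubernetes PodList that the jsonpath
// expression {.items[*].status.podIP} traverses.
type podList struct {
	Items []struct {
		Status struct {
			PodIP string `json:"podIP"`
		} `json:"status"`
	} `json:"items"`
}

// podIPs collects .items[*].status.podIP, the same values the
// jsonpath query would emit.
func podIPs(raw []byte) ([]string, error) {
	var list podList
	if err := json.Unmarshal(raw, &list); err != nil {
		return nil, err
	}
	ips := make([]string, 0, len(list.Items))
	for _, item := range list.Items {
		ips = append(ips, item.Status.PodIP)
	}
	return ips, nil
}

func main() {
	// Hypothetical sample standing in for `kubectl get pods -o json`.
	sample := []byte(`{"items":[{"status":{"podIP":"10.244.0.3"}},{"status":{"podIP":"10.244.1.2"}}]}`)
	ips, err := podIPs(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.Join(ips, " "))
}
```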

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (91.071887ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-985000"

                                                
                                                
** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-985000
helpers_test.go:235: (dbg) docker inspect multinode-985000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-985000",
	        "Id": "63fdacf909e2e1e299ae5b8324c6490351f39723350dc9d23921d4f19e963223",
	        "Created": "2023-11-07T23:45:15.52482752Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-985000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000: exit status 7 (107.750265ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 15:53:15.119144    7427 status.go:249] status error: host: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-985000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.25s)

                                                
                                    
TestMultiNode/serial/AddNode (0.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-985000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-985000 -v 3 --alsologtostderr: exit status 80 (198.408614ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 15:53:15.175139    7431 out.go:296] Setting OutFile to fd 1 ...
	I1107 15:53:15.175509    7431 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:53:15.175514    7431 out.go:309] Setting ErrFile to fd 2...
	I1107 15:53:15.175518    7431 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:53:15.175712    7431 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
	I1107 15:53:15.176037    7431 mustload.go:65] Loading cluster: multinode-985000
	I1107 15:53:15.176325    7431 config.go:182] Loaded profile config "multinode-985000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 15:53:15.176692    7431 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:53:15.226225    7431 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:53:15.248689    7431 out.go:177] 
	W1107 15:53:15.271005    7431 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	
	W1107 15:53:15.271032    7431 out.go:239] * 
	* 
	W1107 15:53:15.274782    7431 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 15:53:15.295924    7431 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-985000 -v 3 --alsologtostderr" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-985000
helpers_test.go:235: (dbg) docker inspect multinode-985000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-985000",
	        "Id": "63fdacf909e2e1e299ae5b8324c6490351f39723350dc9d23921d4f19e963223",
	        "Created": "2023-11-07T23:45:15.52482752Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-985000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
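Note that the post-mortem's plain `docker inspect multinode-985000` returned a *network* object (Scope, bridge Driver, IPAM config), not a container: the container has been deleted, but the minikube-created Docker network of the same name survives, and `docker inspect` without `--type` resolves whichever object matches the name. A hedged sketch of disambiguating the two with the CLI's `--type` flag (the helper names here are illustrative, and running `inspect` assumes a local Docker daemon):

```go
package main

import (
	"fmt"
	"os/exec"
)

// inspectArgs builds the argv for a type-restricted `docker inspect`, so a
// leftover network can no longer shadow a missing container of the same name.
func inspectArgs(objType, name string) []string {
	return []string{"docker", "inspect", "--type", objType, name}
}

// inspect runs the command and returns its combined output.
func inspect(objType, name string) (string, error) {
	args := inspectArgs(objType, name)
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	return string(out), err
}

func main() {
	// With the cluster in the state logged above, the container lookup fails...
	if _, err := inspect("container", "multinode-985000"); err != nil {
		fmt.Println("container lookup failed:", err)
	}
	// ...while the network lookup would still return the bridge definition.
	if out, err := inspect("network", "multinode-985000"); err == nil {
		fmt.Println(out)
	}
}
```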
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000: exit status 7 (106.490068ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 15:53:15.479474    7437 status.go:249] status error: host: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-985000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.36s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:155: expected profile "multinode-985000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-2-552000\",\"Status\":\"\",\"Config\":null,\"Active\":false}],\"valid\":[{\"Name\":\"multinode-985000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-985000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.3\",\"ClusterName\":\"multinode-985000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"AutoPauseInterval\":60000000000,\"GPUs\":\"\"},\"Active\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
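The assertion above parses the `profile list --output json` payload and counts the entries in the profile's `Nodes` array; the failure is that only the original control-plane node is present (1 node instead of the expected 3). A minimal sketch of that node-count check, with the structs trimmed to just the fields the check needs (these are simplified stand-ins, not minikube's real config types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// profileList models only the fields the node-count check reads from
// `minikube profile list --output json`.
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

// nodeCount returns how many nodes the named profile reports.
func nodeCount(raw []byte, profile string) (int, error) {
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		return 0, err
	}
	for _, p := range pl.Valid {
		if p.Name == profile {
			return len(p.Config.Nodes), nil
		}
	}
	return 0, fmt.Errorf("profile %q not found", profile)
}

func main() {
	// The one-node shape from the log above, heavily abbreviated.
	raw := []byte(`{"valid":[{"Name":"multinode-985000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
	n, err := nodeCount(raw, "multinode-985000")
	fmt.Println(n, err) // 1 <nil>
}
```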
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-985000
helpers_test.go:235: (dbg) docker inspect multinode-985000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-985000",
	        "Id": "63fdacf909e2e1e299ae5b8324c6490351f39723350dc9d23921d4f19e963223",
	        "Created": "2023-11-07T23:45:15.52482752Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-985000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000: exit status 7 (106.095243ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 15:53:15.818911    7449 status.go:249] status error: host: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-985000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.34s)

                                                
                                    
TestMultiNode/serial/CopyFile (0.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 status --output json --alsologtostderr: exit status 7 (107.160125ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-985000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 15:53:15.875267    7453 out.go:296] Setting OutFile to fd 1 ...
	I1107 15:53:15.875456    7453 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:53:15.875461    7453 out.go:309] Setting ErrFile to fd 2...
	I1107 15:53:15.875467    7453 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:53:15.875649    7453 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
	I1107 15:53:15.875845    7453 out.go:303] Setting JSON to true
	I1107 15:53:15.875867    7453 mustload.go:65] Loading cluster: multinode-985000
	I1107 15:53:15.875898    7453 notify.go:220] Checking for updates...
	I1107 15:53:15.876153    7453 config.go:182] Loaded profile config "multinode-985000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 15:53:15.876169    7453 status.go:255] checking status of multinode-985000 ...
	I1107 15:53:15.876567    7453 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:53:15.926317    7453 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:53:15.926363    7453 status.go:330] multinode-985000 host status = "" (err=state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	)
	I1107 15:53:15.926382    7453 status.go:257] multinode-985000 status: &{Name:multinode-985000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1107 15:53:15.926395    7453 status.go:260] status error: host: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	E1107 15:53:15.926409    7453 status.go:263] The "multinode-985000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-985000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-985000
helpers_test.go:235: (dbg) docker inspect multinode-985000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-985000",
	        "Id": "63fdacf909e2e1e299ae5b8324c6490351f39723350dc9d23921d4f19e963223",
	        "Created": "2023-11-07T23:45:15.52482752Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-985000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000: exit status 7 (105.764236ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 15:53:16.086366    7459 status.go:249] status error: host: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-985000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.27s)

                                                
                                    
TestMultiNode/serial/StopNode (0.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 node stop m03: exit status 85 (145.764608ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-985000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 status: exit status 7 (159.910763ms)

                                                
                                                
-- stdout --
	multinode-985000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 15:53:16.392703    7465 status.go:260] status error: host: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	E1107 15:53:16.392712    7465 status.go:263] The "multinode-985000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr: exit status 7 (108.301683ms)

                                                
                                                
-- stdout --
	multinode-985000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 15:53:16.449293    7469 out.go:296] Setting OutFile to fd 1 ...
	I1107 15:53:16.449580    7469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:53:16.449586    7469 out.go:309] Setting ErrFile to fd 2...
	I1107 15:53:16.449591    7469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:53:16.449765    7469 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
	I1107 15:53:16.449943    7469 out.go:303] Setting JSON to false
	I1107 15:53:16.449965    7469 mustload.go:65] Loading cluster: multinode-985000
	I1107 15:53:16.450006    7469 notify.go:220] Checking for updates...
	I1107 15:53:16.450236    7469 config.go:182] Loaded profile config "multinode-985000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 15:53:16.450249    7469 status.go:255] checking status of multinode-985000 ...
	I1107 15:53:16.450632    7469 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:53:16.501012    7469 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:53:16.501061    7469 status.go:330] multinode-985000 host status = "" (err=state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	)
	I1107 15:53:16.501081    7469 status.go:257] multinode-985000 status: &{Name:multinode-985000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1107 15:53:16.501096    7469 status.go:260] status error: host: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	E1107 15:53:16.501104    7469 status.go:263] The "multinode-985000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr": multinode-985000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:233: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr": multinode-985000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:237: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr": multinode-985000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-985000
helpers_test.go:235: (dbg) docker inspect multinode-985000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-985000",
	        "Id": "63fdacf909e2e1e299ae5b8324c6490351f39723350dc9d23921d4f19e963223",
	        "Created": "2023-11-07T23:45:15.52482752Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-985000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000: exit status 7 (106.952636ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 15:53:16.662506    7475 status.go:249] status error: host: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-985000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.58s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 node start m03 --alsologtostderr: exit status 85 (147.043759ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 15:53:16.773206    7481 out.go:296] Setting OutFile to fd 1 ...
	I1107 15:53:16.773522    7481 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:53:16.773527    7481 out.go:309] Setting ErrFile to fd 2...
	I1107 15:53:16.773531    7481 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:53:16.773719    7481 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
	I1107 15:53:16.774051    7481 mustload.go:65] Loading cluster: multinode-985000
	I1107 15:53:16.774347    7481 config.go:182] Loaded profile config "multinode-985000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 15:53:16.796296    7481 out.go:177] 
	W1107 15:53:16.817268    7481 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1107 15:53:16.817295    7481 out.go:239] * 
	* 
	W1107 15:53:16.820781    7481 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1107 15:53:16.842318    7481 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:256: I1107 15:53:16.773206    7481 out.go:296] Setting OutFile to fd 1 ...
I1107 15:53:16.773522    7481 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 15:53:16.773527    7481 out.go:309] Setting ErrFile to fd 2...
I1107 15:53:16.773531    7481 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 15:53:16.773719    7481 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
I1107 15:53:16.774051    7481 mustload.go:65] Loading cluster: multinode-985000
I1107 15:53:16.774347    7481 config.go:182] Loaded profile config "multinode-985000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1107 15:53:16.796296    7481 out.go:177] 
W1107 15:53:16.817268    7481 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1107 15:53:16.817295    7481 out.go:239] * 
* 
W1107 15:53:16.820781    7481 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1107 15:53:16.842318    7481 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-985000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 status: exit status 7 (107.303576ms)

                                                
                                                
-- stdout --
	multinode-985000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 15:53:16.972759    7483 status.go:260] status error: host: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	E1107 15:53:16.972770    7483 status.go:263] The "multinode-985000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-985000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-985000
helpers_test.go:235: (dbg) docker inspect multinode-985000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-985000",
	        "Id": "63fdacf909e2e1e299ae5b8324c6490351f39723350dc9d23921d4f19e963223",
	        "Created": "2023-11-07T23:45:15.52482752Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-985000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000: exit status 7 (107.704205ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 15:53:17.134177    7489 status.go:249] status error: host: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-985000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.47s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (786.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-985000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-985000
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-985000: exit status 82 (12.079998406s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-985000"  ...
	* Stopping node "multinode-985000"  ...
	* Stopping node "multinode-985000"  ...
	* Stopping node "multinode-985000"  ...
	* Stopping node "multinode-985000"  ...
	* Stopping node "multinode-985000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-985000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:292: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-985000" : exit status 82
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-985000 --wait=true -v=8 --alsologtostderr
E1107 15:53:43.735857    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:54:06.663916    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 15:58:26.863850    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:58:43.805604    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:59:06.659088    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 16:03:43.800828    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 16:03:49.711270    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 16:04:06.652406    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-985000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m53.639383051s)

                                                
                                                
-- stdout --
	* [multinode-985000] minikube v1.32.0 on Darwin 14.1
	  - MINIKUBE_LOCATION=17585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-985000 in cluster multinode-985000
	* Pulling base image ...
	* docker "multinode-985000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-985000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 15:53:29.325985    7511 out.go:296] Setting OutFile to fd 1 ...
	I1107 15:53:29.326265    7511 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:53:29.326270    7511 out.go:309] Setting ErrFile to fd 2...
	I1107 15:53:29.326275    7511 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:53:29.326471    7511 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
	I1107 15:53:29.327893    7511 out.go:303] Setting JSON to false
	I1107 15:53:29.350366    7511 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4983,"bootTime":1699396226,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1107 15:53:29.350472    7511 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1107 15:53:29.372307    7511 out.go:177] * [multinode-985000] minikube v1.32.0 on Darwin 14.1
	I1107 15:53:29.394150    7511 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 15:53:29.394254    7511 notify.go:220] Checking for updates...
	I1107 15:53:29.436992    7511 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	I1107 15:53:29.458893    7511 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 15:53:29.480052    7511 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 15:53:29.503103    7511 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	I1107 15:53:29.523990    7511 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 15:53:29.545536    7511 config.go:182] Loaded profile config "multinode-985000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 15:53:29.545688    7511 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 15:53:29.603326    7511 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.0 (126437)
	I1107 15:53:29.603458    7511 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 15:53:29.704324    7511 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:88 SystemTime:2023-11-07 23:53:29.694548333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218715136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-e2cce99df426 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=u
nconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription
:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Do
cker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1107 15:53:29.747354    7511 out.go:177] * Using the docker driver based on existing profile
	I1107 15:53:29.768230    7511 start.go:298] selected driver: docker
	I1107 15:53:29.768258    7511 start.go:902] validating driver "docker" against &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-985000 Namespace:default APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 15:53:29.768363    7511 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 15:53:29.768583    7511 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 15:53:29.867337    7511 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:88 SystemTime:2023-11-07 23:53:29.858588387 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218715136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-e2cce99df426 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=u
nconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription
:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Do
cker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1107 15:53:29.870457    7511 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 15:53:29.870531    7511 cni.go:84] Creating CNI manager for ""
	I1107 15:53:29.870541    7511 cni.go:136] 1 nodes found, recommending kindnet
	I1107 15:53:29.870550    7511 start_flags.go:323] config:
	{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-985000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkP
lugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 15:53:29.913748    7511 out.go:177] * Starting control plane node multinode-985000 in cluster multinode-985000
	I1107 15:53:29.935769    7511 cache.go:121] Beginning downloading kic base image for docker with docker
	I1107 15:53:29.957686    7511 out.go:177] * Pulling base image ...
	I1107 15:53:29.999741    7511 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 15:53:29.999814    7511 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1107 15:53:29.999835    7511 cache.go:56] Caching tarball of preloaded images
	I1107 15:53:29.999841    7511 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 15:53:30.000035    7511 preload.go:174] Found /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 15:53:30.000054    7511 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1107 15:53:30.000216    7511 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/multinode-985000/config.json ...
	I1107 15:53:30.053860    7511 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1107 15:53:30.053884    7511 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1107 15:53:30.053902    7511 cache.go:194] Successfully downloaded all kic artifacts
	I1107 15:53:30.053948    7511 start.go:365] acquiring machines lock for multinode-985000: {Name:mk708b1ebc8aeef69ffae97883f5d94723698aef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 15:53:30.054039    7511 start.go:369] acquired machines lock for "multinode-985000" in 68.17µs
	I1107 15:53:30.054059    7511 start.go:96] Skipping create...Using existing machine configuration
	I1107 15:53:30.054070    7511 fix.go:54] fixHost starting: 
	I1107 15:53:30.054315    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:53:30.104864    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:53:30.104923    7511 fix.go:102] recreateIfNeeded on multinode-985000: state= err=unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:53:30.104942    7511 fix.go:107] machineExists: false. err=machine does not exist
	I1107 15:53:30.126674    7511 out.go:177] * docker "multinode-985000" container is missing, will recreate.
	I1107 15:53:30.148220    7511 delete.go:124] DEMOLISHING multinode-985000 ...
	I1107 15:53:30.148403    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:53:30.198876    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	W1107 15:53:30.198920    7511 stop.go:75] unable to get state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:53:30.198947    7511 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:53:30.199316    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:53:30.248963    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:53:30.249015    7511 delete.go:82] Unable to get host status for multinode-985000, assuming it has already been deleted: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:53:30.249085    7511 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-985000
	W1107 15:53:30.298874    7511 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-985000 returned with exit code 1
	I1107 15:53:30.298907    7511 kic.go:371] could not find the container multinode-985000 to remove it. will try anyways
	I1107 15:53:30.298978    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:53:30.348708    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	W1107 15:53:30.348750    7511 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:53:30.348832    7511 cli_runner.go:164] Run: docker exec --privileged -t multinode-985000 /bin/bash -c "sudo init 0"
	W1107 15:53:30.398412    7511 cli_runner.go:211] docker exec --privileged -t multinode-985000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1107 15:53:30.398440    7511 oci.go:650] error shutdown multinode-985000: docker exec --privileged -t multinode-985000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:53:31.399870    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:53:31.453699    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:53:31.453748    7511 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:53:31.453768    7511 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 15:53:31.453802    7511 retry.go:31] will retry after 486.569943ms: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:53:31.941157    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:53:31.996777    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:53:31.996821    7511 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:53:31.996831    7511 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 15:53:31.996855    7511 retry.go:31] will retry after 782.170768ms: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:53:32.781042    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:53:32.833901    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:53:32.833958    7511 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:53:32.833970    7511 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 15:53:32.833999    7511 retry.go:31] will retry after 625.319771ms: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:53:33.461601    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:53:33.514989    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:53:33.515029    7511 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:53:33.515044    7511 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 15:53:33.515070    7511 retry.go:31] will retry after 878.488081ms: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:53:34.395928    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:53:34.449636    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:53:34.449681    7511 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:53:34.449693    7511 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 15:53:34.449716    7511 retry.go:31] will retry after 1.696596742s: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:53:36.148635    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:53:36.201915    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:53:36.201959    7511 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:53:36.201967    7511 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 15:53:36.201988    7511 retry.go:31] will retry after 5.613094272s: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:53:41.816888    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:53:41.872726    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:53:41.872766    7511 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:53:41.872775    7511 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 15:53:41.872799    7511 retry.go:31] will retry after 6.287252855s: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:53:48.161639    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 15:53:48.215359    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 15:53:48.215411    7511 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:53:48.215420    7511 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 15:53:48.215445    7511 oci.go:88] couldn't shut down multinode-985000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	 
	I1107 15:53:48.215517    7511 cli_runner.go:164] Run: docker rm -f -v multinode-985000
	I1107 15:53:48.266366    7511 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-985000
	W1107 15:53:48.315855    7511 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-985000 returned with exit code 1
	I1107 15:53:48.315963    7511 cli_runner.go:164] Run: docker network inspect multinode-985000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 15:53:48.365905    7511 cli_runner.go:164] Run: docker network rm multinode-985000
	I1107 15:53:48.462549    7511 fix.go:114] Sleeping 1 second for extra luck!
	I1107 15:53:49.464710    7511 start.go:125] createHost starting for "" (driver="docker")
	I1107 15:53:49.486730    7511 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1107 15:53:49.486928    7511 start.go:159] libmachine.API.Create for "multinode-985000" (driver="docker")
	I1107 15:53:49.486981    7511 client.go:168] LocalClient.Create starting
	I1107 15:53:49.487161    7511 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca.pem
	I1107 15:53:49.487264    7511 main.go:141] libmachine: Decoding PEM data...
	I1107 15:53:49.487300    7511 main.go:141] libmachine: Parsing certificate...
	I1107 15:53:49.487407    7511 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/cert.pem
	I1107 15:53:49.487475    7511 main.go:141] libmachine: Decoding PEM data...
	I1107 15:53:49.487491    7511 main.go:141] libmachine: Parsing certificate...
	I1107 15:53:49.488182    7511 cli_runner.go:164] Run: docker network inspect multinode-985000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 15:53:49.540769    7511 cli_runner.go:211] docker network inspect multinode-985000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 15:53:49.540852    7511 network_create.go:281] running [docker network inspect multinode-985000] to gather additional debugging logs...
	I1107 15:53:49.540868    7511 cli_runner.go:164] Run: docker network inspect multinode-985000
	W1107 15:53:49.590589    7511 cli_runner.go:211] docker network inspect multinode-985000 returned with exit code 1
	I1107 15:53:49.590616    7511 network_create.go:284] error running [docker network inspect multinode-985000]: docker network inspect multinode-985000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-985000 not found
	I1107 15:53:49.590628    7511 network_create.go:286] output of [docker network inspect multinode-985000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-985000 not found
	
	** /stderr **
	I1107 15:53:49.590768    7511 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 15:53:49.642166    7511 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 15:53:49.642539    7511 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024de4f0}
	I1107 15:53:49.642553    7511 network_create.go:124] attempt to create docker network multinode-985000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1107 15:53:49.642622    7511 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-985000 multinode-985000
	I1107 15:53:49.729124    7511 network_create.go:108] docker network multinode-985000 192.168.58.0/24 created
	I1107 15:53:49.729162    7511 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-985000" container
	I1107 15:53:49.729298    7511 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 15:53:49.781546    7511 cli_runner.go:164] Run: docker volume create multinode-985000 --label name.minikube.sigs.k8s.io=multinode-985000 --label created_by.minikube.sigs.k8s.io=true
	I1107 15:53:49.831790    7511 oci.go:103] Successfully created a docker volume multinode-985000
	I1107 15:53:49.831903    7511 cli_runner.go:164] Run: docker run --rm --name multinode-985000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-985000 --entrypoint /usr/bin/test -v multinode-985000:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 15:53:50.128856    7511 oci.go:107] Successfully prepared a docker volume multinode-985000
	I1107 15:53:50.128902    7511 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 15:53:50.128915    7511 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 15:53:50.129016    7511 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-985000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 15:59:49.557575    7511 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 15:59:49.557700    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:59:49.609574    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:59:49.609689    7511 retry.go:31] will retry after 365.30941ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:59:49.975856    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:59:50.030665    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:59:50.030783    7511 retry.go:31] will retry after 535.041064ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:59:50.568229    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:59:50.623076    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:59:50.623168    7511 retry.go:31] will retry after 502.432258ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:59:51.127112    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:59:51.180423    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	W1107 15:59:51.180540    7511 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	
	W1107 15:59:51.180560    7511 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:59:51.180629    7511 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 15:59:51.180689    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:59:51.230034    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:59:51.230142    7511 retry.go:31] will retry after 328.967911ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:59:51.561397    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:59:51.613129    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:59:51.613221    7511 retry.go:31] will retry after 194.385468ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:59:51.809912    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:59:51.864565    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:59:51.864655    7511 retry.go:31] will retry after 687.325921ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:59:52.553373    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:59:52.605986    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	W1107 15:59:52.606099    7511 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	
	W1107 15:59:52.606119    7511 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:59:52.606131    7511 start.go:128] duration metric: createHost completed in 6m3.071162838s
	I1107 15:59:52.606197    7511 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 15:59:52.606259    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:59:52.655724    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:59:52.655815    7511 retry.go:31] will retry after 219.148124ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:59:52.877417    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:59:52.931653    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:59:52.931745    7511 retry.go:31] will retry after 343.137073ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:59:53.277243    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:59:53.330833    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:59:53.330925    7511 retry.go:31] will retry after 465.297568ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:59:53.798624    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:59:53.855091    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	W1107 15:59:53.855188    7511 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	
	W1107 15:59:53.855205    7511 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:59:53.855272    7511 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 15:59:53.855332    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:59:53.905347    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:59:53.905434    7511 retry.go:31] will retry after 234.031506ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:59:54.141800    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:59:54.193987    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:59:54.194081    7511 retry.go:31] will retry after 463.075375ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:59:54.657745    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:59:54.708665    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 15:59:54.708756    7511 retry.go:31] will retry after 450.234376ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:59:55.161421    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 15:59:55.213856    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	W1107 15:59:55.213955    7511 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	
	W1107 15:59:55.213972    7511 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 15:59:55.213986    7511 fix.go:56] fixHost completed within 6m25.090219961s
	I1107 15:59:55.213994    7511 start.go:83] releasing machines lock for "multinode-985000", held for 6m25.090246947s
	W1107 15:59:55.214008    7511 start.go:691] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W1107 15:59:55.214076    7511 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I1107 15:59:55.214083    7511 start.go:706] Will try again in 5 seconds ...
	I1107 16:00:00.215122    7511 start.go:365] acquiring machines lock for multinode-985000: {Name:mk708b1ebc8aeef69ffae97883f5d94723698aef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 16:00:00.215323    7511 start.go:369] acquired machines lock for "multinode-985000" in 159.299µs
	I1107 16:00:00.215367    7511 start.go:96] Skipping create...Using existing machine configuration
	I1107 16:00:00.215375    7511 fix.go:54] fixHost starting: 
	I1107 16:00:00.215823    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:00:00.268730    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 16:00:00.268770    7511 fix.go:102] recreateIfNeeded on multinode-985000: state= err=unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:00:00.268788    7511 fix.go:107] machineExists: false. err=machine does not exist
	I1107 16:00:00.290411    7511 out.go:177] * docker "multinode-985000" container is missing, will recreate.
	I1107 16:00:00.311913    7511 delete.go:124] DEMOLISHING multinode-985000 ...
	I1107 16:00:00.312115    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:00:00.362639    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	W1107 16:00:00.362683    7511 stop.go:75] unable to get state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:00:00.362699    7511 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:00:00.363047    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:00:00.412883    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 16:00:00.412944    7511 delete.go:82] Unable to get host status for multinode-985000, assuming it has already been deleted: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:00:00.413021    7511 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-985000
	W1107 16:00:00.462469    7511 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-985000 returned with exit code 1
	I1107 16:00:00.462499    7511 kic.go:371] could not find the container multinode-985000 to remove it. will try anyways
	I1107 16:00:00.462581    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:00:00.512697    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	W1107 16:00:00.512741    7511 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:00:00.512821    7511 cli_runner.go:164] Run: docker exec --privileged -t multinode-985000 /bin/bash -c "sudo init 0"
	W1107 16:00:00.562379    7511 cli_runner.go:211] docker exec --privileged -t multinode-985000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1107 16:00:00.562405    7511 oci.go:650] error shutdown multinode-985000: docker exec --privileged -t multinode-985000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:00:01.562760    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:00:01.615194    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 16:00:01.615235    7511 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:00:01.615245    7511 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 16:00:01.615268    7511 retry.go:31] will retry after 318.245692ms: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:00:01.935901    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:00:01.991045    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 16:00:01.991088    7511 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:00:01.991099    7511 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 16:00:01.991122    7511 retry.go:31] will retry after 1.084044945s: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:00:03.077568    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:00:03.132363    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 16:00:03.132403    7511 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:00:03.132411    7511 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 16:00:03.132437    7511 retry.go:31] will retry after 842.041493ms: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:00:03.976903    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:00:04.031181    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 16:00:04.031232    7511 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:00:04.031244    7511 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 16:00:04.031267    7511 retry.go:31] will retry after 1.053748977s: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:00:05.085986    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:00:05.139179    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 16:00:05.139222    7511 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:00:05.139241    7511 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 16:00:05.139266    7511 retry.go:31] will retry after 2.469975668s: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:00:07.610235    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:00:07.665021    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 16:00:07.665072    7511 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:00:07.665084    7511 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 16:00:07.665110    7511 retry.go:31] will retry after 2.751118177s: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:00:10.418508    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:00:10.472730    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 16:00:10.472776    7511 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:00:10.472790    7511 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 16:00:10.472814    7511 retry.go:31] will retry after 5.888307946s: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:00:16.362529    7511 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:00:16.463539    7511 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 16:00:16.463585    7511 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:00:16.463602    7511 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 16:00:16.463631    7511 oci.go:88] couldn't shut down multinode-985000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	 
	I1107 16:00:16.463701    7511 cli_runner.go:164] Run: docker rm -f -v multinode-985000
	I1107 16:00:16.514829    7511 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-985000
	W1107 16:00:16.564954    7511 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-985000 returned with exit code 1
	I1107 16:00:16.565066    7511 cli_runner.go:164] Run: docker network inspect multinode-985000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 16:00:16.614853    7511 cli_runner.go:164] Run: docker network rm multinode-985000
	I1107 16:00:16.715472    7511 fix.go:114] Sleeping 1 second for extra luck!
	I1107 16:00:17.716278    7511 start.go:125] createHost starting for "" (driver="docker")
	I1107 16:00:17.738443    7511 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1107 16:00:17.738636    7511 start.go:159] libmachine.API.Create for "multinode-985000" (driver="docker")
	I1107 16:00:17.738680    7511 client.go:168] LocalClient.Create starting
	I1107 16:00:17.738896    7511 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca.pem
	I1107 16:00:17.738984    7511 main.go:141] libmachine: Decoding PEM data...
	I1107 16:00:17.739013    7511 main.go:141] libmachine: Parsing certificate...
	I1107 16:00:17.739097    7511 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/cert.pem
	I1107 16:00:17.739167    7511 main.go:141] libmachine: Decoding PEM data...
	I1107 16:00:17.739184    7511 main.go:141] libmachine: Parsing certificate...
	I1107 16:00:17.739895    7511 cli_runner.go:164] Run: docker network inspect multinode-985000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 16:00:17.793934    7511 cli_runner.go:211] docker network inspect multinode-985000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 16:00:17.794038    7511 network_create.go:281] running [docker network inspect multinode-985000] to gather additional debugging logs...
	I1107 16:00:17.794059    7511 cli_runner.go:164] Run: docker network inspect multinode-985000
	W1107 16:00:17.844160    7511 cli_runner.go:211] docker network inspect multinode-985000 returned with exit code 1
	I1107 16:00:17.844189    7511 network_create.go:284] error running [docker network inspect multinode-985000]: docker network inspect multinode-985000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-985000 not found
	I1107 16:00:17.844200    7511 network_create.go:286] output of [docker network inspect multinode-985000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-985000 not found
	
	** /stderr **
	I1107 16:00:17.844350    7511 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 16:00:17.896179    7511 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:00:17.897590    7511 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:00:17.897950    7511 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024c20a0}
	I1107 16:00:17.897967    7511 network_create.go:124] attempt to create docker network multinode-985000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1107 16:00:17.898031    7511 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-985000 multinode-985000
	W1107 16:00:17.948035    7511 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-985000 multinode-985000 returned with exit code 1
	W1107 16:00:17.948093    7511 network_create.go:149] failed to create docker network multinode-985000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-985000 multinode-985000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1107 16:00:17.948114    7511 network_create.go:116] failed to create docker network multinode-985000 192.168.67.0/24, will retry: subnet is taken
	I1107 16:00:17.949738    7511 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:00:17.950127    7511 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002317990}
	I1107 16:00:17.950138    7511 network_create.go:124] attempt to create docker network multinode-985000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1107 16:00:17.950209    7511 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-985000 multinode-985000
	I1107 16:00:18.035326    7511 network_create.go:108] docker network multinode-985000 192.168.76.0/24 created
	I1107 16:00:18.035382    7511 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-985000" container
	I1107 16:00:18.035498    7511 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 16:00:18.086342    7511 cli_runner.go:164] Run: docker volume create multinode-985000 --label name.minikube.sigs.k8s.io=multinode-985000 --label created_by.minikube.sigs.k8s.io=true
	I1107 16:00:18.135765    7511 oci.go:103] Successfully created a docker volume multinode-985000
	I1107 16:00:18.135917    7511 cli_runner.go:164] Run: docker run --rm --name multinode-985000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-985000 --entrypoint /usr/bin/test -v multinode-985000:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 16:00:18.420192    7511 oci.go:107] Successfully prepared a docker volume multinode-985000
	I1107 16:00:18.420226    7511 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 16:00:18.420238    7511 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 16:00:18.420342    7511 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-985000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1107 16:06:17.735264    7511 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 16:06:17.735461    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 16:06:17.788319    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 16:06:17.788423    7511 retry.go:31] will retry after 352.604098ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:18.141371    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 16:06:18.194504    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 16:06:18.194619    7511 retry.go:31] will retry after 244.00695ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:18.439343    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 16:06:18.491470    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 16:06:18.491591    7511 retry.go:31] will retry after 367.441953ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:18.861503    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 16:06:18.914599    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	W1107 16:06:18.914697    7511 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	
	W1107 16:06:18.914724    7511 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:18.914792    7511 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 16:06:18.914860    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 16:06:18.968702    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 16:06:18.968815    7511 retry.go:31] will retry after 233.435893ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:19.203346    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 16:06:19.258182    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 16:06:19.258278    7511 retry.go:31] will retry after 269.171194ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:19.528653    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 16:06:19.583035    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 16:06:19.583129    7511 retry.go:31] will retry after 673.250928ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:20.258802    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 16:06:20.312426    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	W1107 16:06:20.312532    7511 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	
	W1107 16:06:20.312553    7511 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:20.312566    7511 start.go:128] duration metric: createHost completed in 6m2.602060111s
	I1107 16:06:20.312639    7511 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1107 16:06:20.312691    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 16:06:20.364262    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 16:06:20.364350    7511 retry.go:31] will retry after 300.509489ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:20.666676    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 16:06:20.719928    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 16:06:20.720018    7511 retry.go:31] will retry after 244.971809ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:20.965956    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 16:06:21.019012    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 16:06:21.019109    7511 retry.go:31] will retry after 551.292925ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:21.571207    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 16:06:21.623549    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	W1107 16:06:21.623662    7511 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	
	W1107 16:06:21.623676    7511 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:21.623744    7511 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1107 16:06:21.623807    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 16:06:21.675109    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 16:06:21.675195    7511 retry.go:31] will retry after 226.300912ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:21.903027    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 16:06:21.958169    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 16:06:21.958258    7511 retry.go:31] will retry after 479.045626ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:22.439779    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 16:06:22.492647    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	I1107 16:06:22.492746    7511 retry.go:31] will retry after 296.363661ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:22.791460    7511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000
	W1107 16:06:22.847359    7511 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000 returned with exit code 1
	W1107 16:06:22.847461    7511 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	
	W1107 16:06:22.847477    7511 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-985000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-985000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:22.847489    7511 fix.go:56] fixHost completed within 6m22.63832486s
	I1107 16:06:22.847496    7511 start.go:83] releasing machines lock for "multinode-985000", held for 6m22.638369189s
	W1107 16:06:22.847569    7511 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-985000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-985000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1107 16:06:22.868896    7511 out.go:177] 
	W1107 16:06:22.889950    7511 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1107 16:06:22.890006    7511 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1107 16:06:22.890071    7511 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1107 16:06:22.911897    7511 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-985000" : exit status 52
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-985000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-985000
helpers_test.go:235: (dbg) docker inspect multinode-985000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-985000",
	        "Id": "b9c99544c4e323a5e73d5f139344b19d0f6ebc5e741bc3d1794831b36d0c45f4",
	        "Created": "2023-11-08T00:00:17.997390539Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-985000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000: exit status 7 (107.633848ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 16:06:23.208287    7904 status.go:249] status error: host: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-985000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (786.01s)
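The `retry.go:31` lines above show minikube retrying the failed port lookup with small, varying delays ("will retry after 673.250928ms", "after 300.509489ms", ...). The general pattern — retry a flaky operation with a jittered delay so concurrent retries don't synchronize — can be sketched as follows. This is an illustrative Python sketch with hypothetical names, not minikube's actual retry code:

```python
import random
import time

def retry_with_backoff(op, max_attempts=5, base_delay=0.2, rng=None):
    """Retry op() until it succeeds, sleeping a jittered delay between
    attempts, roughly mirroring the 'will retry after ...' log lines.
    All names here are illustrative, not minikube's API."""
    rng = rng or random.Random()
    last_err = None
    for _ in range(max_attempts):
        try:
            return op()
        except Exception as err:  # in the log: 'exit status 1' from docker inspect
            last_err = err
            # jitter the delay so repeated attempts are not evenly spaced
            time.sleep(base_delay * (1 + rng.random()))
    raise last_err
```

In the log the operation never succeeds because the container itself is gone, so after the retry budget is exhausted the error surfaces as `createHost completed` / `DRV_CREATE_TIMEOUT`.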

                                                
                                    
TestMultiNode/serial/DeleteNode (0.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 node delete m03: exit status 80 (201.163821ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-985000 node delete m03": exit status 80
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr: exit status 7 (107.737929ms)

                                                
                                                
-- stdout --
	multinode-985000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 16:06:23.465431    7912 out.go:296] Setting OutFile to fd 1 ...
	I1107 16:06:23.465657    7912 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:06:23.465663    7912 out.go:309] Setting ErrFile to fd 2...
	I1107 16:06:23.465667    7912 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:06:23.465850    7912 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
	I1107 16:06:23.466031    7912 out.go:303] Setting JSON to false
	I1107 16:06:23.466053    7912 mustload.go:65] Loading cluster: multinode-985000
	I1107 16:06:23.466096    7912 notify.go:220] Checking for updates...
	I1107 16:06:23.466383    7912 config.go:182] Loaded profile config "multinode-985000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 16:06:23.466394    7912 status.go:255] checking status of multinode-985000 ...
	I1107 16:06:23.466798    7912 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:06:23.517368    7912 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 16:06:23.517420    7912 status.go:330] multinode-985000 host status = "" (err=state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	)
	I1107 16:06:23.517451    7912 status.go:257] multinode-985000 status: &{Name:multinode-985000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1107 16:06:23.517474    7912 status.go:260] status error: host: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	E1107 16:06:23.517483    7912 status.go:263] The "multinode-985000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-985000
helpers_test.go:235: (dbg) docker inspect multinode-985000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-985000",
	        "Id": "b9c99544c4e323a5e73d5f139344b19d0f6ebc5e741bc3d1794831b36d0c45f4",
	        "Created": "2023-11-08T00:00:17.997390539Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-985000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000: exit status 7 (107.838536ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 16:06:23.679701    7918 status.go:249] status error: host: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-985000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.47s)
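The stderr above shows minikube probing disk space over ssh with `df -BG /var | awk 'NR==2{print $4}'` (available GiB of /var). The awk half of that pipeline — take the second row, fourth column, strip the unit — can be reproduced in a few lines of Python. This is an illustrative sketch; minikube itself runs the awk pipeline on the guest:

```python
def free_gib_from_df(df_output: str) -> int:
    """Extract the 'Avail' column from `df -BG /var` output, the same
    field that `awk 'NR==2{print $4}'` selects in the log above."""
    # NR==2 is the first data row (the row after the header); $4 is Avail
    fields = df_output.splitlines()[1].split()
    return int(fields[3].rstrip("G"))
```

In this run the ssh session can never be established (no container, so no port 22 mapping), which is why the probe degrades into the `error getting GiB of /var that is available` warnings rather than a number.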

                                                
                                    
TestMultiNode/serial/StopMultiNode (12.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 stop
multinode_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 stop: exit status 82 (12.263103241s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-985000"  ...
	* Stopping node "multinode-985000"  ...
	* Stopping node "multinode-985000"  ...
	* Stopping node "multinode-985000"  ...
	* Stopping node "multinode-985000"  ...
	* Stopping node "multinode-985000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-985000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:316: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-985000 stop": exit status 82
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 status: exit status 7 (107.605735ms)

                                                
                                                
-- stdout --
	multinode-985000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 16:06:36.050689    7940 status.go:260] status error: host: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	E1107 16:06:36.050701    7940 status.go:263] The "multinode-985000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr: exit status 7 (107.596636ms)

                                                
                                                
-- stdout --
	multinode-985000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 16:06:36.106306    7944 out.go:296] Setting OutFile to fd 1 ...
	I1107 16:06:36.106541    7944 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:06:36.106546    7944 out.go:309] Setting ErrFile to fd 2...
	I1107 16:06:36.106550    7944 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:06:36.106717    7944 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
	I1107 16:06:36.106880    7944 out.go:303] Setting JSON to false
	I1107 16:06:36.106901    7944 mustload.go:65] Loading cluster: multinode-985000
	I1107 16:06:36.106930    7944 notify.go:220] Checking for updates...
	I1107 16:06:36.107175    7944 config.go:182] Loaded profile config "multinode-985000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 16:06:36.107189    7944 status.go:255] checking status of multinode-985000 ...
	I1107 16:06:36.107580    7944 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:06:36.158257    7944 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 16:06:36.158304    7944 status.go:330] multinode-985000 host status = "" (err=state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	)
	I1107 16:06:36.158325    7944 status.go:257] multinode-985000 status: &{Name:multinode-985000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1107 16:06:36.158342    7944 status.go:260] status error: host: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	E1107 16:06:36.158349    7944 status.go:263] The "multinode-985000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr": multinode-985000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr": multinode-985000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-985000
helpers_test.go:235: (dbg) docker inspect multinode-985000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-985000",
	        "Id": "b9c99544c4e323a5e73d5f139344b19d0f6ebc5e741bc3d1794831b36d0c45f4",
	        "Created": "2023-11-08T00:00:17.997390539Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-985000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000: exit status 7 (107.671298ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 16:06:36.319462    7950 status.go:249] status error: host: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-985000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (12.64s)
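The test's `incorrect number of stopped hosts` / `incorrect number of stopped kubelets` failures come from counting states in the plain-text `minikube status` output shown above. Parsing that `key: value` format can be sketched as below; this is an illustrative Python sketch, not the test's actual Go parser:

```python
def parse_status(text: str) -> dict:
    """Parse plain-text `minikube status` output (as printed above)
    into a dict of field -> state, e.g. {'host': 'Nonexistent', ...}."""
    fields = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields
```

Here the test expected `host: Stopped` entries after `minikube stop`, but every field reads `Nonexistent` because the container was already deleted, so the stop counts come out wrong.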

                                                
                                    
TestMultiNode/serial/RestartMultiNode (131.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-985000 --wait=true -v=8 --alsologtostderr --driver=docker 
E1107 16:08:43.797248    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-985000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (2m11.442925701s)

                                                
                                                
-- stdout --
	* [multinode-985000] minikube v1.32.0 on Darwin 14.1
	  - MINIKUBE_LOCATION=17585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-985000 in cluster multinode-985000
	* Pulling base image ...
	* docker "multinode-985000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1107 16:06:36.429656    7956 out.go:296] Setting OutFile to fd 1 ...
	I1107 16:06:36.429860    7956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:06:36.429865    7956 out.go:309] Setting ErrFile to fd 2...
	I1107 16:06:36.429870    7956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 16:06:36.430047    7956 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
	I1107 16:06:36.431487    7956 out.go:303] Setting JSON to false
	I1107 16:06:36.453741    7956 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5770,"bootTime":1699396226,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1107 16:06:36.453860    7956 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1107 16:06:36.475196    7956 out.go:177] * [multinode-985000] minikube v1.32.0 on Darwin 14.1
	I1107 16:06:36.517135    7956 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 16:06:36.517301    7956 notify.go:220] Checking for updates...
	I1107 16:06:36.539086    7956 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	I1107 16:06:36.560388    7956 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 16:06:36.581083    7956 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 16:06:36.602154    7956 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	I1107 16:06:36.623248    7956 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 16:06:36.645031    7956 config.go:182] Loaded profile config "multinode-985000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 16:06:36.645828    7956 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 16:06:36.702920    7956 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.0 (126437)
	I1107 16:06:36.703048    7956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:06:36.802825    7956 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:108 SystemTime:2023-11-08 00:06:36.792958486 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218715136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-e2cce99df426 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1107 16:06:36.845300    7956 out.go:177] * Using the docker driver based on existing profile
	I1107 16:06:36.866297    7956 start.go:298] selected driver: docker
	I1107 16:06:36.866348    7956 start.go:902] validating driver "docker" against &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-985000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 16:06:36.866460    7956 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 16:06:36.866669    7956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 16:06:36.968000    7956 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:108 SystemTime:2023-11-08 00:06:36.957995804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218715136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-e2cce99df426 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1107 16:06:36.971083    7956 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1107 16:06:36.971151    7956 cni.go:84] Creating CNI manager for ""
	I1107 16:06:36.971162    7956 cni.go:136] 1 nodes found, recommending kindnet
	I1107 16:06:36.971171    7956 start_flags.go:323] config:
	{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-985000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 16:06:37.013270    7956 out.go:177] * Starting control plane node multinode-985000 in cluster multinode-985000
	I1107 16:06:37.035339    7956 cache.go:121] Beginning downloading kic base image for docker with docker
	I1107 16:06:37.057060    7956 out.go:177] * Pulling base image ...
	I1107 16:06:37.099344    7956 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 16:06:37.099420    7956 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1107 16:06:37.099458    7956 cache.go:56] Caching tarball of preloaded images
	I1107 16:06:37.099447    7956 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 16:06:37.099657    7956 preload.go:174] Found /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1107 16:06:37.099682    7956 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1107 16:06:37.099821    7956 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/multinode-985000/config.json ...
	I1107 16:06:37.151313    7956 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon, skipping pull
	I1107 16:06:37.151338    7956 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in daemon, skipping load
	I1107 16:06:37.151360    7956 cache.go:194] Successfully downloaded all kic artifacts
	I1107 16:06:37.151408    7956 start.go:365] acquiring machines lock for multinode-985000: {Name:mk708b1ebc8aeef69ffae97883f5d94723698aef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1107 16:06:37.151500    7956 start.go:369] acquired machines lock for "multinode-985000" in 71.797µs
	I1107 16:06:37.151522    7956 start.go:96] Skipping create...Using existing machine configuration
	I1107 16:06:37.151534    7956 fix.go:54] fixHost starting: 
	I1107 16:06:37.151761    7956 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:06:37.201611    7956 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 16:06:37.201661    7956 fix.go:102] recreateIfNeeded on multinode-985000: state= err=unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:37.201683    7956 fix.go:107] machineExists: false. err=machine does not exist
	I1107 16:06:37.223106    7956 out.go:177] * docker "multinode-985000" container is missing, will recreate.
	I1107 16:06:37.264916    7956 delete.go:124] DEMOLISHING multinode-985000 ...
	I1107 16:06:37.265128    7956 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:06:37.316111    7956 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	W1107 16:06:37.316158    7956 stop.go:75] unable to get state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:37.316179    7956 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:37.316524    7956 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:06:37.366167    7956 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 16:06:37.366225    7956 delete.go:82] Unable to get host status for multinode-985000, assuming it has already been deleted: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:37.366311    7956 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-985000
	W1107 16:06:37.416277    7956 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-985000 returned with exit code 1
	I1107 16:06:37.416309    7956 kic.go:371] could not find the container multinode-985000 to remove it. will try anyways
	I1107 16:06:37.416378    7956 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:06:37.466565    7956 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	W1107 16:06:37.466609    7956 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:37.466701    7956 cli_runner.go:164] Run: docker exec --privileged -t multinode-985000 /bin/bash -c "sudo init 0"
	W1107 16:06:37.517006    7956 cli_runner.go:211] docker exec --privileged -t multinode-985000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1107 16:06:37.517035    7956 oci.go:650] error shutdown multinode-985000: docker exec --privileged -t multinode-985000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:38.519383    7956 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:06:38.575384    7956 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 16:06:38.575440    7956 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:38.575451    7956 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 16:06:38.575490    7956 retry.go:31] will retry after 309.592253ms: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:38.886809    7956 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:06:38.941308    7956 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 16:06:38.941359    7956 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:38.941367    7956 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 16:06:38.941392    7956 retry.go:31] will retry after 1.041181657s: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:39.985028    7956 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:06:40.037925    7956 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 16:06:40.037970    7956 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:40.037983    7956 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 16:06:40.038005    7956 retry.go:31] will retry after 600.562539ms: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:40.640786    7956 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:06:40.694568    7956 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 16:06:40.694615    7956 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:40.694622    7956 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 16:06:40.694645    7956 retry.go:31] will retry after 1.114393037s: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:41.810132    7956 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:06:41.863917    7956 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 16:06:41.863963    7956 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:41.863972    7956 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 16:06:41.863994    7956 retry.go:31] will retry after 2.872746259s: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:44.737122    7956 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:06:44.790903    7956 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 16:06:44.790944    7956 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:44.790952    7956 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 16:06:44.790978    7956 retry.go:31] will retry after 2.456874733s: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:47.250236    7956 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:06:47.303390    7956 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 16:06:47.303432    7956 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:47.303439    7956 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 16:06:47.303466    7956 retry.go:31] will retry after 8.204824054s: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:55.508485    7956 cli_runner.go:164] Run: docker container inspect multinode-985000 --format={{.State.Status}}
	W1107 16:06:55.563323    7956 cli_runner.go:211] docker container inspect multinode-985000 --format={{.State.Status}} returned with exit code 1
	I1107 16:06:55.563379    7956 oci.go:662] temporary error verifying shutdown: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	I1107 16:06:55.563388    7956 oci.go:664] temporary error: container multinode-985000 status is  but expect it to be exited
	I1107 16:06:55.563417    7956 oci.go:88] couldn't shut down multinode-985000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000
	 
	I1107 16:06:55.563489    7956 cli_runner.go:164] Run: docker rm -f -v multinode-985000
	I1107 16:06:55.613462    7956 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-985000
	W1107 16:06:55.663467    7956 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-985000 returned with exit code 1
	I1107 16:06:55.663585    7956 cli_runner.go:164] Run: docker network inspect multinode-985000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 16:06:55.714643    7956 cli_runner.go:164] Run: docker network rm multinode-985000
	I1107 16:06:55.809866    7956 fix.go:114] Sleeping 1 second for extra luck!
	I1107 16:06:56.812062    7956 start.go:125] createHost starting for "" (driver="docker")
	I1107 16:06:56.833965    7956 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1107 16:06:56.834128    7956 start.go:159] libmachine.API.Create for "multinode-985000" (driver="docker")
	I1107 16:06:56.834200    7956 client.go:168] LocalClient.Create starting
	I1107 16:06:56.834375    7956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/ca.pem
	I1107 16:06:56.834472    7956 main.go:141] libmachine: Decoding PEM data...
	I1107 16:06:56.834508    7956 main.go:141] libmachine: Parsing certificate...
	I1107 16:06:56.834607    7956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17585-1518/.minikube/certs/cert.pem
	I1107 16:06:56.834686    7956 main.go:141] libmachine: Decoding PEM data...
	I1107 16:06:56.834701    7956 main.go:141] libmachine: Parsing certificate...
	I1107 16:06:56.835496    7956 cli_runner.go:164] Run: docker network inspect multinode-985000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1107 16:06:56.886657    7956 cli_runner.go:211] docker network inspect multinode-985000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1107 16:06:56.886741    7956 network_create.go:281] running [docker network inspect multinode-985000] to gather additional debugging logs...
	I1107 16:06:56.886759    7956 cli_runner.go:164] Run: docker network inspect multinode-985000
	W1107 16:06:56.936873    7956 cli_runner.go:211] docker network inspect multinode-985000 returned with exit code 1
	I1107 16:06:56.936908    7956 network_create.go:284] error running [docker network inspect multinode-985000]: docker network inspect multinode-985000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-985000 not found
	I1107 16:06:56.936921    7956 network_create.go:286] output of [docker network inspect multinode-985000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-985000 not found
	
	** /stderr **
	I1107 16:06:56.937040    7956 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1107 16:06:56.989092    7956 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1107 16:06:56.989498    7956 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023ece30}
	I1107 16:06:56.989513    7956 network_create.go:124] attempt to create docker network multinode-985000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1107 16:06:56.989582    7956 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-985000 multinode-985000
	I1107 16:06:57.076151    7956 network_create.go:108] docker network multinode-985000 192.168.58.0/24 created
	I1107 16:06:57.076189    7956 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-985000" container
	I1107 16:06:57.076311    7956 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1107 16:06:57.127981    7956 cli_runner.go:164] Run: docker volume create multinode-985000 --label name.minikube.sigs.k8s.io=multinode-985000 --label created_by.minikube.sigs.k8s.io=true
	I1107 16:06:57.177580    7956 oci.go:103] Successfully created a docker volume multinode-985000
	I1107 16:06:57.177702    7956 cli_runner.go:164] Run: docker run --rm --name multinode-985000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-985000 --entrypoint /usr/bin/test -v multinode-985000:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1107 16:06:57.481532    7956 oci.go:107] Successfully prepared a docker volume multinode-985000
	I1107 16:06:57.481580    7956 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 16:06:57.481592    7956 kic.go:194] Starting extracting preloaded images to volume ...
	I1107 16:06:57.481695    7956 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-985000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir

                                                
                                                
** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-985000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-985000
helpers_test.go:235: (dbg) docker inspect multinode-985000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-985000",
	        "Id": "65b4f23c59620ce9f2458b16af867ddce20d0091ac341e66ab6a53f8edbf4613",
	        "Created": "2023-11-08T00:06:57.036884732Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-985000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000: exit status 7 (107.468259ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 16:08:47.982644    8069 status.go:249] status error: host: state: unknown state "multinode-985000": docker container inspect multinode-985000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-985000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-985000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (131.67s)

                                                
                                    
TestScheduledStopUnix (300.9s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-916000 --memory=2048 --driver=docker 
E1107 16:13:43.866078    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 16:14:06.718901    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 16:15:06.924681    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-916000 --memory=2048 --driver=docker : signal: killed (5m0.002915797s)

                                                
                                                
-- stdout --
	* [scheduled-stop-916000] minikube v1.32.0 on Darwin 14.1
	  - MINIKUBE_LOCATION=17585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node scheduled-stop-916000 in cluster scheduled-stop-916000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [scheduled-stop-916000] minikube v1.32.0 on Darwin 14.1
	  - MINIKUBE_LOCATION=17585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node scheduled-stop-916000 in cluster scheduled-stop-916000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
panic.go:523: *** TestScheduledStopUnix FAILED at 2023-11-07 16:16:37.833693 -0800 PST m=+4576.735145562
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-916000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-916000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "scheduled-stop-916000",
	        "Id": "5ceb746ddb7e92e070612f33f58f3c2e0066dee00387e42521e75a2c79290795",
	        "Created": "2023-11-08T00:11:38.881626555Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-916000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-916000 -n scheduled-stop-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-916000 -n scheduled-stop-916000: exit status 7 (109.978049ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 16:16:37.998851    8588 status.go:249] status error: host: state: unknown state "scheduled-stop-916000": docker container inspect scheduled-stop-916000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-916000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-916000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-916000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-916000
--- FAIL: TestScheduledStopUnix (300.90s)

                                                
                                    
TestSkaffold (300.89s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe2505041985 version
skaffold_test.go:63: skaffold version: v2.8.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-374000 --memory=2600 --driver=docker 
E1107 16:18:43.862196    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 16:19:06.714369    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 16:20:29.773699    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-374000 --memory=2600 --driver=docker : signal: killed (4m57.758307203s)

                                                
                                                
-- stdout --
	* [skaffold-374000] minikube v1.32.0 on Darwin 14.1
	  - MINIKUBE_LOCATION=17585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node skaffold-374000 in cluster skaffold-374000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [skaffold-374000] minikube v1.32.0 on Darwin 14.1
	  - MINIKUBE_LOCATION=17585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node skaffold-374000 in cluster skaffold-374000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
panic.go:523: *** TestSkaffold FAILED at 2023-11-07 16:21:38.734652 -0800 PST m=+4877.639412751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-374000
helpers_test.go:235: (dbg) docker inspect skaffold-374000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "skaffold-374000",
	        "Id": "8132fbcc934fd4359a701b4275d3917ddf04ff3a8195277055b8155c304e1773",
	        "Created": "2023-11-08T00:16:42.124378037Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-374000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-374000 -n skaffold-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-374000 -n skaffold-374000: exit status 7 (107.742859ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1107 16:21:38.894929    8732 status.go:249] status error: host: state: unknown state "skaffold-374000": docker container inspect skaffold-374000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-374000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-374000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-374000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-374000
--- FAIL: TestSkaffold (300.89s)

                                                
                                    
TestInsufficientStorage (300.73s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-626000 --memory=2048 --output=json --wait=true --driver=docker 
E1107 16:23:43.860072    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 16:24:06.712417    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-626000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.003225315s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"01fd2b02-3694-4704-9844-485fdb25e0ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-626000] minikube v1.32.0 on Darwin 14.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b733264b-04fe-42ed-bba6-fb3fe480df25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17585"}}
	{"specversion":"1.0","id":"24f8c801-1a25-4b45-82f9-af96b5c25486","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig"}}
	{"specversion":"1.0","id":"6e69b03a-dbef-4c0b-87d7-17c3e827ede2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"a0c3e308-c16e-4aeb-a6cd-de848dc56ff7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"43532d08-12fc-42e7-843d-b9267c3b2e88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube"}}
	{"specversion":"1.0","id":"cff76cf0-75ba-489b-b182-fdb2edc3d158","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"37b5e5bc-82be-4ea7-bdd3-396396354fbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a9fa4e69-043b-4c45-96bb-c93cfcfb1388","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a0b26cd7-b2df-40f2-a0de-912eeb0e34d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"460a2113-3913-4edf-94d0-1f1114f885ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"55597ae8-b91a-455e-8cfd-fbaafb0237e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-626000 in cluster insufficient-storage-626000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f24fd96c-b308-431c-976b-e2f3bc57cb15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d3d72a9a-2f95-49e3-8db2-38a9fdb31f9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-626000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-626000 --output=json --layout=cluster: context deadline exceeded (754ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-626000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-626000
--- FAIL: TestInsufficientStorage (300.73s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (9.27s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17585
- KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2221179412/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
! Unable to update hyperkit driver: download: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.32.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.32.0/docker-machine-driver-hyperkit.sha256 Dst:/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2221179412/001/.minikube/bin/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x51c0ec0 0x51c0ec0 0x51c0ec0 0x51c0ec0 0x51c0ec0 0x51c0ec0 0x51c0ec0] Decompressors:map[bz2:0xc000486380 gz:0xc000486388 tar:0xc000486330 tar.bz2:0xc000486340 tar.gz:0xc000486350 tar.xz:0xc000486360 tar.zst:0xc000486370 tbz2:0xc000486340 tgz:0xc000486350 txz:0xc000486360 tzst:0xc000486370 xz:0xc000486390 zip:0xc0004863a0 zst:0xc000486398] Getters:map[file:0xc000beab90 http:0xc0006a5400 https:0xc0006a5450] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
driver_install_or_update_test.go:218: invalid driver version. expected: v1.32.0, got: v1.2.0
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (9.27s)

                                                
                                    

Test pass (140/184)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 20.31
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.35
10 TestDownloadOnly/v1.28.3/json-events 23.95
11 TestDownloadOnly/v1.28.3/preload-exists 0
14 TestDownloadOnly/v1.28.3/kubectl 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.3
16 TestDownloadOnly/DeleteAll 0.64
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.37
18 TestDownloadOnlyKic 1.94
19 TestBinaryMirror 1.58
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.21
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
25 TestAddons/Setup 152.69
29 TestAddons/parallel/InspektorGadget 11.17
30 TestAddons/parallel/MetricsServer 5.92
31 TestAddons/parallel/HelmTiller 14.04
33 TestAddons/parallel/CSI 77.6
34 TestAddons/parallel/Headlamp 13.48
35 TestAddons/parallel/CloudSpanner 5.68
36 TestAddons/parallel/LocalPath 54.5
37 TestAddons/parallel/NvidiaDevicePlugin 5.61
40 TestAddons/serial/GCPAuth/Namespaces 0.1
41 TestAddons/StoppedEnableDisable 11.71
49 TestHyperKitDriverInstallOrUpdate 6.41
52 TestErrorSpam/setup 21.44
53 TestErrorSpam/start 2.03
54 TestErrorSpam/status 1.16
55 TestErrorSpam/pause 1.64
56 TestErrorSpam/unpause 1.75
57 TestErrorSpam/stop 11.38
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 33.7
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 37.37
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.07
68 TestFunctional/serial/CacheCmd/cache/add_remote 3.39
69 TestFunctional/serial/CacheCmd/cache/add_local 1.67
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
71 TestFunctional/serial/CacheCmd/cache/list 0.08
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.41
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.92
74 TestFunctional/serial/CacheCmd/cache/delete 0.16
75 TestFunctional/serial/MinikubeKubectlCmd 0.56
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.77
77 TestFunctional/serial/ExtraConfig 39.38
78 TestFunctional/serial/ComponentHealth 0.06
79 TestFunctional/serial/LogsCmd 3.19
80 TestFunctional/serial/LogsFileCmd 3.01
81 TestFunctional/serial/InvalidService 4.76
83 TestFunctional/parallel/ConfigCmd 0.48
84 TestFunctional/parallel/DashboardCmd 15.07
85 TestFunctional/parallel/DryRun 1.35
86 TestFunctional/parallel/InternationalLanguage 0.73
87 TestFunctional/parallel/StatusCmd 1.23
92 TestFunctional/parallel/AddonsCmd 0.26
93 TestFunctional/parallel/PersistentVolumeClaim 27.87
95 TestFunctional/parallel/SSHCmd 0.74
96 TestFunctional/parallel/CpCmd 1.54
97 TestFunctional/parallel/MySQL 32.45
98 TestFunctional/parallel/FileSync 0.42
99 TestFunctional/parallel/CertSync 2.42
103 TestFunctional/parallel/NodeLabels 0.05
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.4
108 TestFunctional/parallel/Version/short 0.16
109 TestFunctional/parallel/Version/components 0.68
110 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
111 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
112 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
113 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
114 TestFunctional/parallel/ImageCommands/ImageBuild 2.66
115 TestFunctional/parallel/ImageCommands/Setup 2.05
116 TestFunctional/parallel/DockerEnv/bash 1.58
117 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.14
118 TestFunctional/parallel/UpdateContextCmd/no_changes 0.3
119 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.29
120 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.36
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.32
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.76
123 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.94
124 TestFunctional/parallel/ImageCommands/ImageRemove 0.77
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.53
126 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.72
127 TestFunctional/parallel/ServiceCmd/DeployApp 16.2
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.55
130 TestFunctional/parallel/ServiceCmd/List 0.82
131 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.21
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.74
135 TestFunctional/parallel/ServiceCmd/HTTPS 15
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
142 TestFunctional/parallel/ServiceCmd/Format 15
143 TestFunctional/parallel/ServiceCmd/URL 15
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
145 TestFunctional/parallel/ProfileCmd/profile_list 0.51
146 TestFunctional/parallel/MountCmd/any-port 9.08
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.82
148 TestFunctional/parallel/MountCmd/specific-port 2.71
149 TestFunctional/parallel/MountCmd/VerifyCleanup 2.79
150 TestFunctional/delete_addon-resizer_images 0.14
151 TestFunctional/delete_my-image_image 0.05
152 TestFunctional/delete_minikube_cached_images 0.05
156 TestImageBuild/serial/Setup 21.14
157 TestImageBuild/serial/NormalBuild 1.81
158 TestImageBuild/serial/BuildWithBuildArg 0.95
159 TestImageBuild/serial/BuildWithDockerIgnore 0.74
160 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.75
170 TestJSONOutput/start/Command 35.88
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.55
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.59
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 10.85
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.74
195 TestKicCustomNetwork/create_custom_network 23.5
196 TestKicCustomNetwork/use_default_bridge_network 22.85
197 TestKicExistingNetwork 23.33
198 TestKicCustomSubnet 23.14
199 TestKicStaticIP 23.74
200 TestMainNoArgs 0.08
201 TestMinikubeProfile 48.73
204 TestMountStart/serial/StartWithMountFirst 7.15
205 TestMountStart/serial/VerifyMountFirst 0.38
206 TestMountStart/serial/StartWithMountSecond 7.54
225 TestPreload 168.89
246 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 8.06

TestDownloadOnly/v1.16.0/json-events (20.31s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-010000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-010000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (20.309380157s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (20.31s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.35s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-010000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-010000: exit status 85 (354.127934ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-010000 | jenkins | v1.32.0 | 07 Nov 23 15:00 PST |          |
	|         | -p download-only-010000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 15:00:20
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 15:00:20.997411    2091 out.go:296] Setting OutFile to fd 1 ...
	I1107 15:00:20.997617    2091 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:00:20.997623    2091 out.go:309] Setting ErrFile to fd 2...
	I1107 15:00:20.997627    2091 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:00:20.997805    2091 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
	W1107 15:00:20.997911    2091 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17585-1518/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17585-1518/.minikube/config/config.json: no such file or directory
	I1107 15:00:20.999665    2091 out.go:303] Setting JSON to true
	I1107 15:00:21.023872    2091 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1795,"bootTime":1699396226,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1107 15:00:21.023967    2091 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1107 15:00:21.045803    2091 out.go:97] [download-only-010000] minikube v1.32.0 on Darwin 14.1
	I1107 15:00:21.068474    2091 out.go:169] MINIKUBE_LOCATION=17585
	I1107 15:00:21.046029    2091 notify.go:220] Checking for updates...
	W1107 15:00:21.046075    2091 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball: no such file or directory
	I1107 15:00:21.111507    2091 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	I1107 15:00:21.132444    2091 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 15:00:21.153478    2091 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 15:00:21.174591    2091 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	W1107 15:00:21.216764    2091 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1107 15:00:21.217237    2091 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 15:00:21.277866    2091 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.0 (126437)
	I1107 15:00:21.277995    2091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 15:00:21.382023    2091 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:false NGoroutines:50 SystemTime:2023-11-07 23:00:21.369216614 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:6 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218715136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-e2cce99df426 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=u
nconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription
:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Do
cker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1107 15:00:21.403680    2091 out.go:97] Using the docker driver based on user configuration
	I1107 15:00:21.403723    2091 start.go:298] selected driver: docker
	I1107 15:00:21.403738    2091 start.go:902] validating driver "docker" against <nil>
	I1107 15:00:21.403930    2091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 15:00:21.504404    2091 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:false NGoroutines:50 SystemTime:2023-11-07 23:00:21.494598784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:6 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218715136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-e2cce99df426 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=u
nconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription
:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Do
cker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1107 15:00:21.504575    2091 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1107 15:00:21.509278    2091 start_flags.go:394] Using suggested 5882MB memory alloc based on sys=32768MB, container=5930MB
	I1107 15:00:21.509432    2091 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1107 15:00:21.530568    2091 out.go:169] Using Docker Desktop driver with root privileges
	I1107 15:00:21.551432    2091 cni.go:84] Creating CNI manager for ""
	I1107 15:00:21.551472    2091 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1107 15:00:21.551489    2091 start_flags.go:323] config:
	{Name:download-only-010000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:5882 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-010000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 15:00:21.573637    2091 out.go:97] Starting control plane node download-only-010000 in cluster download-only-010000
	I1107 15:00:21.573681    2091 cache.go:121] Beginning downloading kic base image for docker with docker
	I1107 15:00:21.595457    2091 out.go:97] Pulling base image ...
	I1107 15:00:21.595503    2091 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 15:00:21.595575    2091 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 15:00:21.645192    2091 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1107 15:00:21.645423    2091 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1107 15:00:21.645567    2091 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1107 15:00:21.655787    2091 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1107 15:00:21.655809    2091 cache.go:56] Caching tarball of preloaded images
	I1107 15:00:21.655999    2091 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 15:00:21.677683    2091 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1107 15:00:21.677710    2091 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1107 15:00:21.762978    2091 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1107 15:00:30.581187    2091 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
	I1107 15:00:32.449910    2091 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1107 15:00:32.450078    2091 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1107 15:00:32.992293    2091 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1107 15:00:32.992634    2091 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/download-only-010000/config.json ...
	I1107 15:00:32.992656    2091 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/download-only-010000/config.json: {Name:mk3faea1b4d8a0f8b2ae3b515934a789b6cc6e8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1107 15:00:32.992963    2091 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1107 15:00:32.993284    2091 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-010000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.35s)

TestDownloadOnly/v1.28.3/json-events (23.95s)

=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-010000 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-010000 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=docker --driver=docker : (23.949174389s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (23.95s)

TestDownloadOnly/v1.28.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

TestDownloadOnly/v1.28.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.3/kubectl
--- PASS: TestDownloadOnly/v1.28.3/kubectl (0.00s)

TestDownloadOnly/v1.28.3/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-010000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-010000: exit status 85 (296.836041ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-010000 | jenkins | v1.32.0 | 07 Nov 23 15:00 PST |          |
	|         | -p download-only-010000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-010000 | jenkins | v1.32.0 | 07 Nov 23 15:00 PST |          |
	|         | -p download-only-010000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/07 15:00:41
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.21.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1107 15:00:41.663436    2130 out.go:296] Setting OutFile to fd 1 ...
	I1107 15:00:41.663716    2130 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:00:41.663722    2130 out.go:309] Setting ErrFile to fd 2...
	I1107 15:00:41.663726    2130 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:00:41.663918    2130 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
	W1107 15:00:41.664016    2130 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17585-1518/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17585-1518/.minikube/config/config.json: no such file or directory
	I1107 15:00:41.665230    2130 out.go:303] Setting JSON to true
	I1107 15:00:41.687095    2130 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1815,"bootTime":1699396226,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1107 15:00:41.687188    2130 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1107 15:00:41.708906    2130 out.go:97] [download-only-010000] minikube v1.32.0 on Darwin 14.1
	I1107 15:00:41.730597    2130 out.go:169] MINIKUBE_LOCATION=17585
	I1107 15:00:41.709133    2130 notify.go:220] Checking for updates...
	I1107 15:00:41.773451    2130 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	I1107 15:00:41.794822    2130 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 15:00:41.816674    2130 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 15:00:41.837444    2130 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	W1107 15:00:41.879426    2130 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1107 15:00:41.880251    2130 config.go:182] Loaded profile config "download-only-010000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1107 15:00:41.880328    2130 start.go:810] api.Load failed for download-only-010000: filestore "download-only-010000": Docker machine "download-only-010000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1107 15:00:41.880488    2130 driver.go:378] Setting default libvirt URI to qemu:///system
	W1107 15:00:41.880526    2130 start.go:810] api.Load failed for download-only-010000: filestore "download-only-010000": Docker machine "download-only-010000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1107 15:00:41.941598    2130 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.0 (126437)
	I1107 15:00:41.941716    2130 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 15:00:42.042830    2130 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:false NGoroutines:50 SystemTime:2023-11-07 23:00:42.033929664 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:6 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218715136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-e2cce99df426 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=u
nconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription
:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Do
cker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1107 15:00:42.064166    2130 out.go:97] Using the docker driver based on existing profile
	I1107 15:00:42.064184    2130 start.go:298] selected driver: docker
	I1107 15:00:42.064193    2130 start.go:902] validating driver "docker" against &{Name:download-only-010000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:5882 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-010000 Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 15:00:42.064403    2130 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 15:00:42.163495    2130 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:false NGoroutines:50 SystemTime:2023-11-07 23:00:42.153319918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:6 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218715136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-e2cce99df426 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=u
nconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription
:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Do
cker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1107 15:00:42.166661    2130 cni.go:84] Creating CNI manager for ""
	I1107 15:00:42.166685    2130 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1107 15:00:42.166697    2130 start_flags.go:323] config:
	{Name:download-only-010000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:5882 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-010000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 15:00:42.187945    2130 out.go:97] Starting control plane node download-only-010000 in cluster download-only-010000
	I1107 15:00:42.188009    2130 cache.go:121] Beginning downloading kic base image for docker with docker
	I1107 15:00:42.208801    2130 out.go:97] Pulling base image ...
	I1107 15:00:42.208917    2130 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 15:00:42.209016    2130 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1107 15:00:42.259857    2130 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1107 15:00:42.260187    2130 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1107 15:00:42.260204    2130 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory, skipping pull
	I1107 15:00:42.260210    2130 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 exists in cache, skipping pull
	I1107 15:00:42.260220    2130 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
	I1107 15:00:42.263816    2130 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1107 15:00:42.263829    2130 cache.go:56] Caching tarball of preloaded images
	I1107 15:00:42.263982    2130 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 15:00:42.284822    2130 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1107 15:00:42.284849    2130 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 ...
	I1107 15:00:42.371371    2130 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4?checksum=md5:82104bbf889ff8b69d5c141ce86c05ac -> /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1107 15:00:48.238381    2130 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 ...
	I1107 15:00:48.238594    2130 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 ...
	I1107 15:00:48.856121    2130 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1107 15:00:48.856199    2130 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/download-only-010000/config.json ...
	I1107 15:00:48.856583    2130 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1107 15:00:48.856860    2130 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17585-1518/.minikube/cache/darwin/amd64/v1.28.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-010000"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.30s)

TestDownloadOnly/DeleteAll (0.64s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.64s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-010000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.37s)

TestDownloadOnlyKic (1.94s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-308000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-308000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-308000
--- PASS: TestDownloadOnlyKic (1.94s)

TestBinaryMirror (1.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-570000 --alsologtostderr --binary-mirror http://127.0.0.1:49339 --driver=docker 
helpers_test.go:175: Cleaning up "binary-mirror-570000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-570000
--- PASS: TestBinaryMirror (1.58s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-533000
addons_test.go:927: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-533000: exit status 85 (208.414826ms)

-- stdout --
	* Profile "addons-533000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-533000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-533000
addons_test.go:938: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-533000: exit status 85 (187.449571ms)

-- stdout --
	* Profile "addons-533000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-533000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

TestAddons/Setup (152.69s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-533000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-533000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m32.691692169s)
--- PASS: TestAddons/Setup (152.69s)

TestAddons/parallel/InspektorGadget (11.17s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-v249d" [8f72ff69-2255-4ee5-b3bd-2949d99847e5] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.01055437s
addons_test.go:840: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-533000
addons_test.go:840: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-533000: (6.158432264s)
--- PASS: TestAddons/parallel/InspektorGadget (11.17s)

TestAddons/parallel/MetricsServer (5.92s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 4.041257ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-zkx8s" [07880880-9a2d-4eaf-8ca4-aa282fc28b18] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014565036s
addons_test.go:414: (dbg) Run:  kubectl --context addons-533000 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-darwin-amd64 -p addons-533000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.92s)

TestAddons/parallel/HelmTiller (14.04s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 3.71275ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-7gk9j" [054e452e-d47d-466d-bf33-559930c792be] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.01337846s
addons_test.go:472: (dbg) Run:  kubectl --context addons-533000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-533000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.249785435s)
addons_test.go:477: kubectl --context addons-533000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:472: (dbg) Run:  kubectl --context addons-533000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-533000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (2.416649934s)
addons_test.go:489: (dbg) Run:  out/minikube-darwin-amd64 -p addons-533000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.04s)

TestAddons/parallel/CSI (77.6s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 14.1842ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-533000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-533000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [259a7a02-407f-4a89-b9c2-7535d13e69b0] Pending
helpers_test.go:344: "task-pv-pod" [259a7a02-407f-4a89-b9c2-7535d13e69b0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [259a7a02-407f-4a89-b9c2-7535d13e69b0] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.011891899s
addons_test.go:583: (dbg) Run:  kubectl --context addons-533000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-533000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-533000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-533000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-533000 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-533000 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-533000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-533000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0911f039-b6ed-44b7-b442-f8adc1d536d6] Pending
helpers_test.go:344: "task-pv-pod-restore" [0911f039-b6ed-44b7-b442-f8adc1d536d6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0911f039-b6ed-44b7-b442-f8adc1d536d6] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.010912462s
addons_test.go:625: (dbg) Run:  kubectl --context addons-533000 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-533000 delete pod task-pv-pod-restore: (1.008227071s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-533000 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-533000 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-darwin-amd64 -p addons-533000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-darwin-amd64 -p addons-533000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.897840266s)
addons_test.go:641: (dbg) Run:  out/minikube-darwin-amd64 -p addons-533000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (77.60s)

TestAddons/parallel/Headlamp (13.48s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-533000 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-533000 --alsologtostderr -v=1: (1.468712067s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-m6654" [46ae844f-6622-4b26-adcf-3ae4a8f6fe6c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-m6654" [46ae844f-6622-4b26-adcf-3ae4a8f6fe6c] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.012817608s
--- PASS: TestAddons/parallel/Headlamp (13.48s)

TestAddons/parallel/CloudSpanner (5.68s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-5jj99" [475a51f9-ac72-4d69-bf2a-9fb3df3449a1] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.01174835s
addons_test.go:859: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-533000
--- PASS: TestAddons/parallel/CloudSpanner (5.68s)

TestAddons/parallel/LocalPath (54.5s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-533000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-533000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-533000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ca4d2dc9-1bff-49b6-a7cc-b7d6f285f838] Pending
helpers_test.go:344: "test-local-path" [ca4d2dc9-1bff-49b6-a7cc-b7d6f285f838] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ca4d2dc9-1bff-49b6-a7cc-b7d6f285f838] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ca4d2dc9-1bff-49b6-a7cc-b7d6f285f838] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.010020834s
addons_test.go:890: (dbg) Run:  kubectl --context addons-533000 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-darwin-amd64 -p addons-533000 ssh "cat /opt/local-path-provisioner/pvc-ab86c613-8a39-4c96-946c-0e0fff1b2be0_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-533000 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-533000 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-darwin-amd64 -p addons-533000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-darwin-amd64 -p addons-533000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.445535273s)
--- PASS: TestAddons/parallel/LocalPath (54.50s)

TestAddons/parallel/NvidiaDevicePlugin (5.61s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9ntbk" [cf450583-c9f9-4d9b-99dd-5b57dbe2333f] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.012856309s
addons_test.go:954: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-533000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.61s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-533000 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-533000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (11.71s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-533000
addons_test.go:171: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-533000: (10.996350508s)
addons_test.go:175: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-533000
addons_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-533000
addons_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-533000
--- PASS: TestAddons/StoppedEnableDisable (11.71s)

TestHyperKitDriverInstallOrUpdate (6.41s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.41s)

TestErrorSpam/setup (21.44s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-848000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-848000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-848000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-848000 --driver=docker : (21.441430041s)
--- PASS: TestErrorSpam/setup (21.44s)

TestErrorSpam/start (2.03s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-848000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-848000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-848000 start --dry-run
--- PASS: TestErrorSpam/start (2.03s)

TestErrorSpam/status (1.16s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-848000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-848000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-848000 status
--- PASS: TestErrorSpam/status (1.16s)

TestErrorSpam/pause (1.64s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-848000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-848000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-848000 pause
--- PASS: TestErrorSpam/pause (1.64s)

TestErrorSpam/unpause (1.75s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-848000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-848000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-848000 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

TestErrorSpam/stop (11.38s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-848000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-848000 stop: (10.754161655s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-848000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-848000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-848000 stop
--- PASS: TestErrorSpam/stop (11.38s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17585-1518/.minikube/files/etc/test/nested/copy/2089/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (33.7s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-980000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-980000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (33.695917476s)
--- PASS: TestFunctional/serial/StartWithProxy (33.70s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.37s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-980000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-980000 --alsologtostderr -v=8: (37.369104526s)
functional_test.go:659: soft start took 37.369571029s for "functional-980000" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.37s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-980000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.39s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-980000 cache add registry.k8s.io/pause:3.1: (1.139263664s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-980000 cache add registry.k8s.io/pause:3.3: (1.180673895s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-980000 cache add registry.k8s.io/pause:latest: (1.073815727s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.39s)

TestFunctional/serial/CacheCmd/cache/add_local (1.67s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-980000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2892356657/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 cache add minikube-local-cache-test:functional-980000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-980000 cache add minikube-local-cache-test:functional-980000: (1.135158985s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 cache delete minikube-local-cache-test:functional-980000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-980000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.67s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.41s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.41s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-980000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (378.319917ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)

TestFunctional/serial/CacheCmd/cache/delete (0.16s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.56s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 kubectl -- --context functional-980000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.56s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.77s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-980000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.77s)

TestFunctional/serial/ExtraConfig (39.38s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-980000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1107 15:08:43.655490    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:08:43.663091    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:08:43.675229    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:08:43.695887    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:08:43.738104    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:08:43.818797    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:08:43.980936    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:08:44.301186    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:08:44.941646    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:08:46.221866    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
E1107 15:08:48.782139    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-980000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.37971289s)
functional_test.go:757: restart took 39.379861271s for "functional-980000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.38s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-980000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.19s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-980000 logs: (3.189678586s)
--- PASS: TestFunctional/serial/LogsCmd (3.19s)

TestFunctional/serial/LogsFileCmd (3.01s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd1223528819/001/logs.txt
E1107 15:08:53.902357    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-980000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd1223528819/001/logs.txt: (3.013581032s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.01s)

TestFunctional/serial/InvalidService (4.76s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-980000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-980000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-980000: exit status 115 (536.556762ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32019 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-980000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-980000 delete -f testdata/invalidsvc.yaml: (1.056559959s)
--- PASS: TestFunctional/serial/InvalidService (4.76s)
TestFunctional/parallel/ConfigCmd (0.48s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-980000 config get cpus: exit status 14 (57.527016ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-980000 config get cpus: exit status 14 (56.289807ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
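The get/set/unset round-trip above relies on `config get` exiting with status 14 when the key is absent. A minimal sketch of that contract, where `configGet` and the in-memory map are hypothetical stand-ins for minikube's on-disk config store:

```go
package main

import (
	"errors"
	"fmt"
)

// ErrKeyNotFound mirrors the "specified key could not be found in config"
// message in the test output; 14 is the exit status the test asserts.
var ErrKeyNotFound = errors.New("specified key could not be found in config")

const exitCodeNotFound = 14

// configGet is a hypothetical stand-in for `minikube config get`:
// it returns the value and exit code 0, or exit code 14 on a missing key.
func configGet(cfg map[string]string, key string) (string, int, error) {
	v, ok := cfg[key]
	if !ok {
		return "", exitCodeNotFound, ErrKeyNotFound
	}
	return v, 0, nil
}

func main() {
	cfg := map[string]string{} // after `config unset cpus` the key is gone
	if _, code, err := configGet(cfg, "cpus"); err != nil {
		fmt.Println(code, err) // 14 specified key could not be found in config
	}
	cfg["cpus"] = "2" // `config set cpus 2`
	v, code, _ := configGet(cfg, "cpus")
	fmt.Println(v, code) // 2 0
}
```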
TestFunctional/parallel/DashboardCmd (15.07s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-980000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-980000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 4301: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.07s)
TestFunctional/parallel/DryRun (1.35s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-980000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-980000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (600.991777ms)
-- stdout --
	* [functional-980000] minikube v1.32.0 on Darwin 14.1
	  - MINIKUBE_LOCATION=17585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1107 15:10:28.337383    4198 out.go:296] Setting OutFile to fd 1 ...
	I1107 15:10:28.337971    4198 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:10:28.337981    4198 out.go:309] Setting ErrFile to fd 2...
	I1107 15:10:28.337989    4198 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:10:28.338340    4198 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
	I1107 15:10:28.340568    4198 out.go:303] Setting JSON to false
	I1107 15:10:28.362826    4198 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2402,"bootTime":1699396226,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1107 15:10:28.362944    4198 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1107 15:10:28.384359    4198 out.go:177] * [functional-980000] minikube v1.32.0 on Darwin 14.1
	I1107 15:10:28.405605    4198 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 15:10:28.405648    4198 notify.go:220] Checking for updates...
	I1107 15:10:28.426774    4198 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	I1107 15:10:28.448571    4198 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 15:10:28.469540    4198 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 15:10:28.492458    4198 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	I1107 15:10:28.513686    4198 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 15:10:28.535292    4198 config.go:182] Loaded profile config "functional-980000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 15:10:28.536065    4198 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 15:10:28.592217    4198 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.0 (126437)
	I1107 15:10:28.592346    4198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 15:10:28.699970    4198 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:false NGoroutines:59 SystemTime:2023-11-07 23:10:28.689845155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218715136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-e2cce99df426 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1107 15:10:28.742408    4198 out.go:177] * Using the docker driver based on existing profile
	I1107 15:10:28.763510    4198 start.go:298] selected driver: docker
	I1107 15:10:28.763533    4198 start.go:902] validating driver "docker" against &{Name:functional-980000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-980000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 15:10:28.763650    4198 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 15:10:28.788569    4198 out.go:177] 
	W1107 15:10:28.809371    4198 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1107 15:10:28.830829    4198 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-980000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.35s)
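The dry-run above exits with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MiB is below the 1800MB usable minimum reported in the error. A minimal sketch of that validation, with `validateMemory` as a hypothetical stand-in for minikube's start-time check:

```go
package main

import "fmt"

// minUsableMemoryMB matches the 1800MB floor quoted in the
// RSRC_INSUFFICIENT_REQ_MEMORY message above.
const minUsableMemoryMB = 1800

// validateMemory is a hypothetical stand-in for minikube's start-time
// memory check: it rejects any request below the usable minimum.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // rejected, as in the dry-run above
	fmt.Println(validateMemory(4000)) // the profile's configured 4000MB passes
}
```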
TestFunctional/parallel/InternationalLanguage (0.73s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-980000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-980000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (730.225431ms)
-- stdout --
	* [functional-980000] minikube v1.32.0 sur Darwin 14.1
	  - MINIKUBE_LOCATION=17585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1107 15:10:29.668495    4262 out.go:296] Setting OutFile to fd 1 ...
	I1107 15:10:29.668688    4262 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:10:29.668694    4262 out.go:309] Setting ErrFile to fd 2...
	I1107 15:10:29.668698    4262 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1107 15:10:29.668877    4262 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
	I1107 15:10:29.670369    4262 out.go:303] Setting JSON to false
	I1107 15:10:29.693436    4262 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2403,"bootTime":1699396226,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1107 15:10:29.693543    4262 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1107 15:10:29.715178    4262 out.go:177] * [functional-980000] minikube v1.32.0 sur Darwin 14.1
	I1107 15:10:29.778285    4262 out.go:177]   - MINIKUBE_LOCATION=17585
	I1107 15:10:29.757315    4262 notify.go:220] Checking for updates...
	I1107 15:10:29.820358    4262 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
	I1107 15:10:29.862059    4262 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1107 15:10:29.904305    4262 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1107 15:10:29.950773    4262 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube
	I1107 15:10:29.992583    4262 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1107 15:10:30.014323    4262 config.go:182] Loaded profile config "functional-980000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1107 15:10:30.014822    4262 driver.go:378] Setting default libvirt URI to qemu:///system
	I1107 15:10:30.073294    4262 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.0 (126437)
	I1107 15:10:30.073434    4262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1107 15:10:30.182878    4262 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:false NGoroutines:59 SystemTime:2023-11-07 23:10:30.172721581 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218715136 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-e2cce99df426 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1107 15:10:30.204547    4262 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1107 15:10:30.240997    4262 start.go:298] selected driver: docker
	I1107 15:10:30.241015    4262 start.go:902] validating driver "docker" against &{Name:functional-980000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-980000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1107 15:10:30.241100    4262 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1107 15:10:30.265234    4262 out.go:177] 
	W1107 15:10:30.286183    4262 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1107 15:10:30.306889    4262 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.73s)
TestFunctional/parallel/StatusCmd (1.23s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.23s)
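The `-f` flag exercised above takes a Go text/template rendered against minikube's status struct. A minimal sketch of that mechanism, using an illustrative `Status` type rather than minikube's real one:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// Status is an illustrative struct, not minikube's actual status type;
// its field names match the placeholders used by the `status -f` template.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// renderStatus parses a Go text/template and executes it against st,
// the same mechanism the -f/--format flag relies on.
func renderStatus(tmpl string, st Status) (string, error) {
	t, err := template.New("status").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var sb strings.Builder
	if err := t.Execute(&sb, st); err != nil {
		return "", err
	}
	return sb.String(), nil
}

func main() {
	out, err := renderStatus(
		"host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}",
		Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // host:Running,kubelet:Running,apiserver:Running,kubeconfig:Configured
}
```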
TestFunctional/parallel/AddonsCmd (0.26s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)
TestFunctional/parallel/PersistentVolumeClaim (27.87s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [97f9f252-7e98-4cce-816b-57686e144278] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.012987734s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-980000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-980000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-980000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-980000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [353b650a-6565-4af9-85c2-4c94acd75640] Pending
helpers_test.go:344: "sp-pod" [353b650a-6565-4af9-85c2-4c94acd75640] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [353b650a-6565-4af9-85c2-4c94acd75640] Running
E1107 15:10:05.583316    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.012124701s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-980000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-980000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-980000 delete -f testdata/storage-provisioner/pod.yaml: (1.102946954s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-980000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fc9fb027-7b89-4e9c-9848-ae190037e33e] Pending
helpers_test.go:344: "sp-pod" [fc9fb027-7b89-4e9c-9848-ae190037e33e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fc9fb027-7b89-4e9c-9848-ae190037e33e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.044954467s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-980000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.87s)

TestFunctional/parallel/SSHCmd (0.74s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.74s)

TestFunctional/parallel/CpCmd (1.54s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh -n functional-980000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 cp functional-980000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd3438906182/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh -n functional-980000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.54s)

TestFunctional/parallel/MySQL (32.45s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-980000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-ntvtm" [5a24937d-9281-4e49-a31f-1c48c172ad56] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-ntvtm" [5a24937d-9281-4e49-a31f-1c48c172ad56] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 29.063901225s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-980000 exec mysql-859648c796-ntvtm -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-980000 exec mysql-859648c796-ntvtm -- mysql -ppassword -e "show databases;": exit status 1 (125.729322ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-980000 exec mysql-859648c796-ntvtm -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-980000 exec mysql-859648c796-ntvtm -- mysql -ppassword -e "show databases;": exit status 1 (115.42517ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-980000 exec mysql-859648c796-ntvtm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.45s)

TestFunctional/parallel/FileSync (0.42s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/2089/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh "sudo cat /etc/test/nested/copy/2089/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.42s)

TestFunctional/parallel/CertSync (2.42s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/2089.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh "sudo cat /etc/ssl/certs/2089.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/2089.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh "sudo cat /usr/share/ca-certificates/2089.pem"
E1107 15:09:04.142399    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/20892.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh "sudo cat /etc/ssl/certs/20892.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/20892.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh "sudo cat /usr/share/ca-certificates/20892.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.42s)

TestFunctional/parallel/NodeLabels (0.05s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-980000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.4s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-980000 ssh "sudo systemctl is-active crio": exit status 1 (396.713577ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.40s)

TestFunctional/parallel/Version/short (0.16s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.16s)

TestFunctional/parallel/Version/components (0.68s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.68s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-980000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-980000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-980000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-980000 image ls --format short --alsologtostderr:
I1107 15:10:43.426569    4531 out.go:296] Setting OutFile to fd 1 ...
I1107 15:10:43.426803    4531 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 15:10:43.426808    4531 out.go:309] Setting ErrFile to fd 2...
I1107 15:10:43.426813    4531 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 15:10:43.427002    4531 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
I1107 15:10:43.427651    4531 config.go:182] Loaded profile config "functional-980000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1107 15:10:43.427748    4531 config.go:182] Loaded profile config "functional-980000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1107 15:10:43.428144    4531 cli_runner.go:164] Run: docker container inspect functional-980000 --format={{.State.Status}}
I1107 15:10:43.482023    4531 ssh_runner.go:195] Run: systemctl --version
I1107 15:10:43.482104    4531 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-980000
I1107 15:10:43.531994    4531 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49982 SSHKeyPath:/Users/jenkins/minikube-integration/17585-1518/.minikube/machines/functional-980000/id_rsa Username:docker}
I1107 15:10:43.613905    4531 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-980000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/mysql                     | 5.7               | 547b3c3c15a96 | 501MB  |
| registry.k8s.io/kube-apiserver              | v1.28.3           | 5374347291230 | 126MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-controller-manager     | v1.28.3           | 10baa1ca17068 | 122MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | latest            | c20060033e06f | 187MB  |
| docker.io/library/nginx                     | alpine            | b135667c98980 | 47.7MB |
| registry.k8s.io/kube-proxy                  | v1.28.3           | bfc896cf80fba | 73.1MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| docker.io/library/minikube-local-cache-test | functional-980000 | e06a6f7613d2f | 30B    |
| registry.k8s.io/kube-scheduler              | v1.28.3           | 6d1b4fd1b182d | 60.1MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-980000 | ffd4cfbbe753e | 32.9MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-980000 image ls --format table --alsologtostderr:
I1107 15:10:45.764368    4566 out.go:296] Setting OutFile to fd 1 ...
I1107 15:10:45.764649    4566 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 15:10:45.764654    4566 out.go:309] Setting ErrFile to fd 2...
I1107 15:10:45.764658    4566 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 15:10:45.764842    4566 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
I1107 15:10:45.765468    4566 config.go:182] Loaded profile config "functional-980000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1107 15:10:45.765559    4566 config.go:182] Loaded profile config "functional-980000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1107 15:10:45.765971    4566 cli_runner.go:164] Run: docker container inspect functional-980000 --format={{.State.Status}}
I1107 15:10:45.816330    4566 ssh_runner.go:195] Run: systemctl --version
I1107 15:10:45.816395    4566 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-980000
I1107 15:10:45.866447    4566 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49982 SSHKeyPath:/Users/jenkins/minikube-integration/17585-1518/.minikube/machines/functional-980000/id_rsa Username:docker}
I1107 15:10:45.947009    4566 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-980000 image ls --format json --alsologtostderr:
[{"id":"e06a6f7613d2fd0b5292bd83e1a8ea3f4d8ef71fdc998e9f81b39976e8504022","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-980000"],"size":"30"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"122000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-980000"],"size":"32900000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"b135667c98980d3ca424a228cc4d2afdb287dc4e1a6a813a34b2e1705517488e","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47700000"},{"id":"547b3c3c15a9698ee368530b251e6baa66807c64742355e6724ba59b4d3ec8a6","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"60100000"},{"id":"bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"73100000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"126000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-980000 image ls --format json --alsologtostderr:
I1107 15:10:45.474224    4560 out.go:296] Setting OutFile to fd 1 ...
I1107 15:10:45.474442    4560 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 15:10:45.474449    4560 out.go:309] Setting ErrFile to fd 2...
I1107 15:10:45.474453    4560 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 15:10:45.474647    4560 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
I1107 15:10:45.475238    4560 config.go:182] Loaded profile config "functional-980000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1107 15:10:45.475331    4560 config.go:182] Loaded profile config "functional-980000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1107 15:10:45.475719    4560 cli_runner.go:164] Run: docker container inspect functional-980000 --format={{.State.Status}}
I1107 15:10:45.525740    4560 ssh_runner.go:195] Run: systemctl --version
I1107 15:10:45.525813    4560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-980000
I1107 15:10:45.575845    4560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49982 SSHKeyPath:/Users/jenkins/minikube-integration/17585-1518/.minikube/machines/functional-980000/id_rsa Username:docker}
I1107 15:10:45.658382    4560 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-980000 image ls --format yaml --alsologtostderr:
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-980000
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: e06a6f7613d2fd0b5292bd83e1a8ea3f4d8ef71fdc998e9f81b39976e8504022
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-980000
size: "30"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: b135667c98980d3ca424a228cc4d2afdb287dc4e1a6a813a34b2e1705517488e
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47700000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "126000000"
- id: 10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "122000000"
- id: bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "73100000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 547b3c3c15a9698ee368530b251e6baa66807c64742355e6724ba59b4d3ec8a6
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "60100000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-980000 image ls --format yaml --alsologtostderr:
I1107 15:10:43.721892    4537 out.go:296] Setting OutFile to fd 1 ...
I1107 15:10:43.722094    4537 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 15:10:43.722099    4537 out.go:309] Setting ErrFile to fd 2...
I1107 15:10:43.722103    4537 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 15:10:43.722279    4537 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
I1107 15:10:43.723018    4537 config.go:182] Loaded profile config "functional-980000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1107 15:10:43.723109    4537 config.go:182] Loaded profile config "functional-980000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1107 15:10:43.723505    4537 cli_runner.go:164] Run: docker container inspect functional-980000 --format={{.State.Status}}
I1107 15:10:43.774449    4537 ssh_runner.go:195] Run: systemctl --version
I1107 15:10:43.774520    4537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-980000
I1107 15:10:43.825335    4537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49982 SSHKeyPath:/Users/jenkins/minikube-integration/17585-1518/.minikube/machines/functional-980000/id_rsa Username:docker}
I1107 15:10:43.907736    4537 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)
TestFunctional/parallel/ImageCommands/ImageBuild (2.66s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-980000 ssh pgrep buildkitd: exit status 1 (349.653887ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 image build -t localhost/my-image:functional-980000 testdata/build --alsologtostderr
2023/11/07 15:10:45 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-980000 image build -t localhost/my-image:functional-980000 testdata/build --alsologtostderr: (2.026728736s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-980000 image build -t localhost/my-image:functional-980000 testdata/build --alsologtostderr:
I1107 15:10:44.359015    4553 out.go:296] Setting OutFile to fd 1 ...
I1107 15:10:44.359290    4553 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 15:10:44.359296    4553 out.go:309] Setting ErrFile to fd 2...
I1107 15:10:44.359300    4553 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 15:10:44.359480    4553 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17585-1518/.minikube/bin
I1107 15:10:44.360073    4553 config.go:182] Loaded profile config "functional-980000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1107 15:10:44.360687    4553 config.go:182] Loaded profile config "functional-980000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1107 15:10:44.361097    4553 cli_runner.go:164] Run: docker container inspect functional-980000 --format={{.State.Status}}
I1107 15:10:44.411650    4553 ssh_runner.go:195] Run: systemctl --version
I1107 15:10:44.411728    4553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-980000
I1107 15:10:44.462670    4553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49982 SSHKeyPath:/Users/jenkins/minikube-integration/17585-1518/.minikube/machines/functional-980000/id_rsa Username:docker}
I1107 15:10:44.545954    4553 build_images.go:151] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.693308782.tar
I1107 15:10:44.546026    4553 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1107 15:10:44.554782    4553 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.693308782.tar
I1107 15:10:44.558849    4553 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.693308782.tar: stat -c "%s %y" /var/lib/minikube/build/build.693308782.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.693308782.tar': No such file or directory
I1107 15:10:44.558880    4553 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.693308782.tar --> /var/lib/minikube/build/build.693308782.tar (3072 bytes)
I1107 15:10:44.580206    4553 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.693308782
I1107 15:10:44.588902    4553 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.693308782 -xf /var/lib/minikube/build/build.693308782.tar
I1107 15:10:44.598068    4553 docker.go:346] Building image: /var/lib/minikube/build/build.693308782
I1107 15:10:44.598154    4553 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-980000 /var/lib/minikube/build/build.693308782
#0 building with "default" instance using docker driver
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.0s
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.9s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.3s
#6 [2/3] RUN true
#6 DONE 0.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:e6a02df0b7f60e3e4a9a3d19ddc0946e4edcbb81f0a50a81c36d915edf22f839 done
#8 naming to localhost/my-image:functional-980000 done
#8 DONE 0.0s
I1107 15:10:46.227065    4553 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-980000 /var/lib/minikube/build/build.693308782: (1.628924214s)
I1107 15:10:46.227137    4553 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.693308782
I1107 15:10:46.236419    4553 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.693308782.tar
I1107 15:10:46.245653    4553 build_images.go:207] Built localhost/my-image:functional-980000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.693308782.tar
I1107 15:10:46.245680    4553 build_images.go:123] succeeded building to: functional-980000
I1107 15:10:46.245685    4553 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.66s)
TestFunctional/parallel/ImageCommands/Setup (2.05s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.981157261s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-980000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.05s)
TestFunctional/parallel/DockerEnv/bash (1.58s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-980000 docker-env) && out/minikube-darwin-amd64 status -p functional-980000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-980000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.58s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.14s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 image load --daemon gcr.io/google-containers/addon-resizer:functional-980000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-980000 image load --daemon gcr.io/google-containers/addon-resizer:functional-980000 --alsologtostderr: (3.848261508s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.14s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.3s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.30s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.36s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.36s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 image load --daemon gcr.io/google-containers/addon-resizer:functional-980000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-980000 image load --daemon gcr.io/google-containers/addon-resizer:functional-980000 --alsologtostderr: (1.984333715s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.32s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.76s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.357580969s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-980000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 image load --daemon gcr.io/google-containers/addon-resizer:functional-980000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-980000 image load --daemon gcr.io/google-containers/addon-resizer:functional-980000 --alsologtostderr: (3.989298824s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.76s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.94s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 image save gcr.io/google-containers/addon-resizer:functional-980000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-980000 image save gcr.io/google-containers/addon-resizer:functional-980000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.935673811s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.94s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.77s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 image rm gcr.io/google-containers/addon-resizer:functional-980000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.77s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-980000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.225701322s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.53s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.72s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-980000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 image save --daemon gcr.io/google-containers/addon-resizer:functional-980000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-980000 image save --daemon gcr.io/google-containers/addon-resizer:functional-980000 --alsologtostderr: (1.604267982s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-980000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.72s)
TestFunctional/parallel/ServiceCmd/DeployApp (16.2s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-980000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-980000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-l7kjh" [14a12162-e158-4c2b-8521-3a9f76760298] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E1107 15:09:24.623673    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/addons-533000/client.crt: no such file or directory
helpers_test.go:344: "hello-node-d7447cc7f-l7kjh" [14a12162-e158-4c2b-8521-3a9f76760298] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 16.06573486s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (16.20s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-980000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-980000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-980000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-980000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3966: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)
TestFunctional/parallel/ServiceCmd/List (0.82s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.82s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-980000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.21s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-980000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [57b1a360-d277-401b-b30a-0bd5b5e44674] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [57b1a360-d277-401b-b30a-0bd5b5e44674] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.01627316s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.21s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.74s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 service list -o json
functional_test.go:1493: Took "735.012858ms" to run "out/minikube-darwin-amd64 -p functional-980000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.74s)
TestFunctional/parallel/ServiceCmd/HTTPS (15s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-980000 service --namespace=default --https --url hello-node: signal: killed (15.00181725s)
-- stdout --
	https://127.0.0.1:50238
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1521: found endpoint: https://127.0.0.1:50238
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-980000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-980000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 4010: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)
TestFunctional/parallel/ServiceCmd/Format (15s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-980000 service hello-node --url --format={{.IP}}: signal: killed (15.002815452s)
-- stdout --
	127.0.0.1
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)
TestFunctional/parallel/ServiceCmd/URL (15s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-980000 service hello-node --url: signal: killed (15.002127239s)
-- stdout --
	http://127.0.0.1:50299
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1564: found endpoint for hello-node: http://127.0.0.1:50299
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)
TestFunctional/parallel/ProfileCmd/profile_list (0.51s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "430.83334ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "82.298382ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)
TestFunctional/parallel/MountCmd/any-port (9.08s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-980000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3099125028/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1699398627021895000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3099125028/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1699398627021895000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3099125028/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1699398627021895000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3099125028/001/test-1699398627021895000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-980000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (392.264762ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-980000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (469.016637ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  7 23:10 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  7 23:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  7 23:10 test-1699398627021895000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh cat /mount-9p/test-1699398627021895000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-980000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [62ff0cc3-5955-4996-82bc-06626045eebb] Pending
helpers_test.go:344: "busybox-mount" [62ff0cc3-5955-4996-82bc-06626045eebb] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [62ff0cc3-5955-4996-82bc-06626045eebb] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [62ff0cc3-5955-4996-82bc-06626045eebb] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.011669304s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-980000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-980000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3099125028/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.08s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.82s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "720.906091ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "100.846536ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.82s)

TestFunctional/parallel/MountCmd/specific-port (2.71s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-980000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port727177092/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-980000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (457.225621ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-980000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port727177092/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-980000 ssh "sudo umount -f /mount-9p": exit status 1 (377.856843ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-980000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-980000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port727177092/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.71s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.79s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-980000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1602569897/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-980000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1602569897/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-980000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1602569897/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-980000 ssh "findmnt -T" /mount1: exit status 1 (561.700469ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-980000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-980000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-980000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1602569897/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-980000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1602569897/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-980000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1602569897/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.79s)

TestFunctional/delete_addon-resizer_images (0.14s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-980000
--- PASS: TestFunctional/delete_addon-resizer_images (0.14s)

TestFunctional/delete_my-image_image (0.05s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-980000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-980000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestImageBuild/serial/Setup (21.14s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-806000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-806000 --driver=docker : (21.135484814s)
--- PASS: TestImageBuild/serial/Setup (21.14s)

TestImageBuild/serial/NormalBuild (1.81s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-806000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-806000: (1.814565026s)
--- PASS: TestImageBuild/serial/NormalBuild (1.81s)

TestImageBuild/serial/BuildWithBuildArg (0.95s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-806000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.95s)

TestImageBuild/serial/BuildWithDockerIgnore (0.74s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-806000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.74s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.75s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-806000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.75s)

TestJSONOutput/start/Command (35.88s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-783000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E1107 15:19:06.576693    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
E1107 15:19:34.269679    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-783000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (35.88427315s)
--- PASS: TestJSONOutput/start/Command (35.88s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.55s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-783000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.55s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-783000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.85s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-783000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-783000 --output=json --user=testUser: (10.851537434s)
--- PASS: TestJSONOutput/stop/Command (10.85s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.74s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-534000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-534000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (362.872176ms)

-- stdout --
	{"specversion":"1.0","id":"40a4c38c-d440-454e-87e7-f59eda716688","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-534000] minikube v1.32.0 on Darwin 14.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c808acf7-a3ed-4ebc-afd2-7315abe5f76a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17585"}}
	{"specversion":"1.0","id":"59ed774d-d430-4440-b318-de2fc61e64f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig"}}
	{"specversion":"1.0","id":"5745b3dc-f2e9-4e99-94c3-49a98f50051d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"00646045-ebdd-411d-b720-241df267a132","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"74858bd8-10ee-4f01-9d93-ee84ba32b467","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17585-1518/.minikube"}}
	{"specversion":"1.0","id":"06723e73-25be-4edd-9624-60b7ebe97cb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c65fa126-097b-4c8d-8e12-28aa80973d88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-534000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-534000
--- PASS: TestErrorJSONOutput (0.74s)

TestKicCustomNetwork/create_custom_network (23.5s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-662000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-662000 --network=: (21.043301018s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-662000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-662000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-662000: (2.403301266s)
--- PASS: TestKicCustomNetwork/create_custom_network (23.50s)

TestKicCustomNetwork/use_default_bridge_network (22.85s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-894000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-894000 --network=bridge: (20.584933123s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-894000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-894000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-894000: (2.216645377s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.85s)

TestKicExistingNetwork (23.33s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-046000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-046000 --network=existing-network: (20.739317144s)
helpers_test.go:175: Cleaning up "existing-network-046000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-046000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-046000: (2.245006131s)
--- PASS: TestKicExistingNetwork (23.33s)

TestKicCustomSubnet (23.14s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-797000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-797000 --subnet=192.168.60.0/24: (20.685898874s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-797000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-797000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-797000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-797000: (2.397978231s)
--- PASS: TestKicCustomSubnet (23.14s)

TestKicStaticIP (23.74s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-934000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-934000 --static-ip=192.168.200.200: (21.111843049s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-934000 ip
helpers_test.go:175: Cleaning up "static-ip-934000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-934000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-934000: (2.398589575s)
--- PASS: TestKicStaticIP (23.74s)

TestMainNoArgs (0.08s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (48.73s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-591000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-591000 --driver=docker : (20.556237302s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-594000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-594000 --driver=docker : (21.694917819s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-591000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-594000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-594000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-594000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-594000: (2.391078673s)
helpers_test.go:175: Cleaning up "first-591000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-591000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-591000: (2.413200939s)
--- PASS: TestMinikubeProfile (48.73s)
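The `profile list -ojson` steps above emit machine-readable JSON that the test then inspects. A minimal sketch of that kind of check, assuming a top-level `valid`/`invalid` payload shape with `Name` fields (the sample payload below is hypothetical, not copied from this run):

```python
import json

# Hypothetical payload in the shape `minikube profile list -ojson` is
# assumed to emit: top-level "valid"/"invalid" lists of profile objects.
payload = json.loads("""
{"invalid": [],
 "valid": [{"Name": "first-591000"}, {"Name": "second-594000"}]}
""")

# Both profiles created by the test should appear as valid.
names = [p["Name"] for p in payload["valid"]]
assert "first-591000" in names and "second-594000" in names
print(names)
```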

TestMountStart/serial/StartWithMountFirst (7.15s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-538000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-538000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.145102791s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.15s)

TestMountStart/serial/VerifyMountFirst (0.38s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-538000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

TestMountStart/serial/StartWithMountSecond (7.54s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-552000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-552000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.539977654s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.54s)

TestPreload (168.89s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-114000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E1107 16:09:06.720063    2089 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17585-1518/.minikube/profiles/functional-980000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-114000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m46.146848147s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-114000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-114000 image pull gcr.io/k8s-minikube/busybox: (1.604582363s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-114000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-114000: (10.830632444s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-114000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-114000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (47.564178076s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-114000 image list
helpers_test.go:175: Cleaning up "test-preload-114000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-114000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-114000: (2.45422407s)
--- PASS: TestPreload (168.89s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.06s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17585
- KUBECONFIG=/Users/jenkins/minikube-integration/17585-1518/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3375328928/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3375328928/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3375328928/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3375328928/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.06s)


Test skip (17/184)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

TestDownloadOnly/v1.28.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

TestAddons/parallel/Registry (13.95s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 14.383099ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-m22hq" [b94d5c74-dd7c-4ab6-ae53-4a808782dbac] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.015763501s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tpljh" [9fbc6da8-cf37-45e7-8ca9-cd3b95e934b3] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.01208541s
addons_test.go:339: (dbg) Run:  kubectl --context addons-533000 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-533000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-533000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.852868514s)
addons_test.go:354: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (13.95s)

TestAddons/parallel/Ingress (12.57s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-533000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-533000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-533000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f1da6a9a-7501-4bde-bc0b-12f83ac5d196] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f1da6a9a-7501-4bde-bc0b-12f83ac5d196] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.096090348s
addons_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p addons-533000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:281: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (12.57s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (7.13s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-980000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-980000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-sp4x5" [e6aa4f06-ed62-4b40-b2ae-01bb72e8d3c6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-sp4x5" [e6aa4f06-ed62-4b40-b2ae-01bb72e8d3c6] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.013818537s
functional_test.go:1645: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.13s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
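Totals such as "Test skip (17/184)" in the section headers can be cross-checked by tallying the `--- PASS`/`--- FAIL`/`--- SKIP` result lines. A short sketch over a few result lines copied verbatim from this report:

```python
import re
from collections import Counter

# A handful of result lines copied from the report above.
lines = [
    "--- PASS: TestKicStaticIP (23.74s)",
    "--- PASS: TestPreload (168.89s)",
    "--- SKIP: TestGvisorAddon (0.00s)",
    "--- SKIP: TestScheduledStopWindows (0.00s)",
]

# go test result lines: "--- <STATUS>: <TestName> (<seconds>s)"
pattern = re.compile(r"^--- (PASS|FAIL|SKIP): (\S+) \(([\d.]+)s\)$")
counts = Counter()
for line in lines:
    m = pattern.match(line)
    if m:
        counts[m.group(1)] += 1

print(dict(counts))
```

Run over the full report, the SKIP count should match the "Test skip (17/184)" header.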
