Test Report: Docker_macOS 17659

6d6265e3ef99b46ca0d9a494272903ce35bc82a3:2023-11-22:31994

Failed tests (25/189)

TestOffline (753.86s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-201000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-201000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : exit status 52 (12m32.957844208s)

-- stdout --
	* [offline-docker-201000] minikube v1.32.0 on Darwin 14.1.1
	  - MINIKUBE_LOCATION=17659
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node offline-docker-201000 in cluster offline-docker-201000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "offline-docker-201000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1122 22:00:12.582246    8629 out.go:296] Setting OutFile to fd 1 ...
	I1122 22:00:12.582551    8629 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 22:00:12.582557    8629 out.go:309] Setting ErrFile to fd 2...
	I1122 22:00:12.582561    8629 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 22:00:12.582745    8629 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
	I1122 22:00:12.584251    8629 out.go:303] Setting JSON to false
	I1122 22:00:12.607554    8629 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5386,"bootTime":1700713826,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1122 22:00:12.607658    8629 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1122 22:00:12.629061    8629 out.go:177] * [offline-docker-201000] minikube v1.32.0 on Darwin 14.1.1
	I1122 22:00:12.650205    8629 out.go:177]   - MINIKUBE_LOCATION=17659
	I1122 22:00:12.650281    8629 notify.go:220] Checking for updates...
	I1122 22:00:12.691881    8629 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	I1122 22:00:12.713073    8629 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1122 22:00:12.734055    8629 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 22:00:12.754865    8629 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	I1122 22:00:12.775976    8629 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 22:00:12.797274    8629 driver.go:378] Setting default libvirt URI to qemu:///system
	I1122 22:00:12.853162    8629 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.2 (129061)
	I1122 22:00:12.853318    8629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 22:00:13.040807    8629 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:148 SystemTime:2023-11-23 06:00:12.998403377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218719232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-160d99154625 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=
unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescriptio
n:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1122 22:00:13.123954    8629 out.go:177] * Using the docker driver based on user configuration
	I1122 22:00:13.144824    8629 start.go:298] selected driver: docker
	I1122 22:00:13.144843    8629 start.go:902] validating driver "docker" against <nil>
	I1122 22:00:13.144858    8629 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 22:00:13.147784    8629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 22:00:13.247013    8629 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:9 ContainersRunning:1 ContainersPaused:0 ContainersStopped:8 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:78 OomKillDisable:false NGoroutines:148 SystemTime:2023-11-23 06:00:13.236902146 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218719232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-160d99154625 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=
unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescriptio
n:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1122 22:00:13.247234    8629 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1122 22:00:13.247416    8629 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 22:00:13.268006    8629 out.go:177] * Using Docker Desktop driver with root privileges
	I1122 22:00:13.289363    8629 cni.go:84] Creating CNI manager for ""
	I1122 22:00:13.289405    8629 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1122 22:00:13.289421    8629 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1122 22:00:13.289443    8629 start_flags.go:323] config:
	{Name:offline-docker-201000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-201000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1122 22:00:13.311007    8629 out.go:177] * Starting control plane node offline-docker-201000 in cluster offline-docker-201000
	I1122 22:00:13.353017    8629 cache.go:121] Beginning downloading kic base image for docker with docker
	I1122 22:00:13.395395    8629 out.go:177] * Pulling base image ...
	I1122 22:00:13.460378    8629 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 22:00:13.460463    8629 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1122 22:00:13.460484    8629 cache.go:56] Caching tarball of preloaded images
	I1122 22:00:13.460478    8629 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1122 22:00:13.460724    8629 preload.go:174] Found /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1122 22:00:13.460744    8629 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1122 22:00:13.462327    8629 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/offline-docker-201000/config.json ...
	I1122 22:00:13.462445    8629 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/offline-docker-201000/config.json: {Name:mk99328187afc0c77b6036d03aedce8d86fe8915 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 22:00:13.512753    8629 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1122 22:00:13.512775    8629 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1122 22:00:13.512794    8629 cache.go:194] Successfully downloaded all kic artifacts
	I1122 22:00:13.512838    8629 start.go:365] acquiring machines lock for offline-docker-201000: {Name:mk899b89ead87437238e1fdb6ff6834599a0d014 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 22:00:13.512991    8629 start.go:369] acquired machines lock for "offline-docker-201000" in 141.754µs
	I1122 22:00:13.513036    8629 start.go:93] Provisioning new machine with config: &{Name:offline-docker-201000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-201000 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1122 22:00:13.513100    8629 start.go:125] createHost starting for "" (driver="docker")
	I1122 22:00:13.555049    8629 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1122 22:00:13.555234    8629 start.go:159] libmachine.API.Create for "offline-docker-201000" (driver="docker")
	I1122 22:00:13.555261    8629 client.go:168] LocalClient.Create starting
	I1122 22:00:13.555392    8629 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca.pem
	I1122 22:00:13.555442    8629 main.go:141] libmachine: Decoding PEM data...
	I1122 22:00:13.555460    8629 main.go:141] libmachine: Parsing certificate...
	I1122 22:00:13.555538    8629 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/cert.pem
	I1122 22:00:13.555575    8629 main.go:141] libmachine: Decoding PEM data...
	I1122 22:00:13.555583    8629 main.go:141] libmachine: Parsing certificate...
	I1122 22:00:13.556185    8629 cli_runner.go:164] Run: docker network inspect offline-docker-201000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 22:00:13.665364    8629 cli_runner.go:211] docker network inspect offline-docker-201000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 22:00:13.665470    8629 network_create.go:281] running [docker network inspect offline-docker-201000] to gather additional debugging logs...
	I1122 22:00:13.665497    8629 cli_runner.go:164] Run: docker network inspect offline-docker-201000
	W1122 22:00:13.716747    8629 cli_runner.go:211] docker network inspect offline-docker-201000 returned with exit code 1
	I1122 22:00:13.716780    8629 network_create.go:284] error running [docker network inspect offline-docker-201000]: docker network inspect offline-docker-201000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-201000 not found
	I1122 22:00:13.716792    8629 network_create.go:286] output of [docker network inspect offline-docker-201000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-201000 not found
	
	** /stderr **
	I1122 22:00:13.716922    8629 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 22:00:13.812363    8629 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:00:13.812737    8629 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002382f10}
	I1122 22:00:13.812755    8629 network_create.go:124] attempt to create docker network offline-docker-201000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1122 22:00:13.812823    8629 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-201000 offline-docker-201000
	I1122 22:00:13.899965    8629 network_create.go:108] docker network offline-docker-201000 192.168.58.0/24 created
	I1122 22:00:13.900023    8629 kic.go:121] calculated static IP "192.168.58.2" for the "offline-docker-201000" container
	I1122 22:00:13.900167    8629 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 22:00:13.953489    8629 cli_runner.go:164] Run: docker volume create offline-docker-201000 --label name.minikube.sigs.k8s.io=offline-docker-201000 --label created_by.minikube.sigs.k8s.io=true
	I1122 22:00:14.007501    8629 oci.go:103] Successfully created a docker volume offline-docker-201000
	I1122 22:00:14.007622    8629 cli_runner.go:164] Run: docker run --rm --name offline-docker-201000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-201000 --entrypoint /usr/bin/test -v offline-docker-201000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1122 22:00:14.686242    8629 oci.go:107] Successfully prepared a docker volume offline-docker-201000
	I1122 22:00:14.686283    8629 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 22:00:14.686295    8629 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 22:00:14.686387    8629 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-201000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 22:06:13.559073    8629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 22:06:13.559205    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:06:13.613792    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	I1122 22:06:13.613918    8629 retry.go:31] will retry after 146.916336ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:13.761422    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:06:13.812751    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	I1122 22:06:13.812864    8629 retry.go:31] will retry after 364.993885ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:14.180232    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:06:14.232945    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	I1122 22:06:14.233055    8629 retry.go:31] will retry after 353.770489ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:14.587832    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:06:14.642733    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	W1122 22:06:14.642841    8629 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	
	W1122 22:06:14.642868    8629 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:14.642921    8629 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 22:06:14.642978    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:06:14.692835    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	I1122 22:06:14.692950    8629 retry.go:31] will retry after 149.620246ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:14.844639    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:06:14.897768    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	I1122 22:06:14.897860    8629 retry.go:31] will retry after 254.870955ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:15.153109    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:06:15.206872    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	I1122 22:06:15.206967    8629 retry.go:31] will retry after 776.054383ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:15.985407    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:06:16.039683    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	W1122 22:06:16.039790    8629 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	
	W1122 22:06:16.039812    8629 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:16.039827    8629 start.go:128] duration metric: createHost completed in 6m2.524617125s
	I1122 22:06:16.039833    8629 start.go:83] releasing machines lock for "offline-docker-201000", held for 6m2.52473681s
	W1122 22:06:16.039847    8629 start.go:691] error starting host: creating host: create host timed out in 360.000000 seconds
	I1122 22:06:16.040272    8629 cli_runner.go:164] Run: docker container inspect offline-docker-201000 --format={{.State.Status}}
	W1122 22:06:16.089980    8629 cli_runner.go:211] docker container inspect offline-docker-201000 --format={{.State.Status}} returned with exit code 1
	I1122 22:06:16.090032    8629 delete.go:82] Unable to get host status for offline-docker-201000, assuming it has already been deleted: state: unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	W1122 22:06:16.090102    8629 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1122 22:06:16.090112    8629 start.go:706] Will try again in 5 seconds ...
	I1122 22:06:21.092533    8629 start.go:365] acquiring machines lock for offline-docker-201000: {Name:mk899b89ead87437238e1fdb6ff6834599a0d014 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 22:06:21.092730    8629 start.go:369] acquired machines lock for "offline-docker-201000" in 152.714µs
	I1122 22:06:21.092761    8629 start.go:96] Skipping create...Using existing machine configuration
	I1122 22:06:21.092777    8629 fix.go:54] fixHost starting: 
	I1122 22:06:21.093232    8629 cli_runner.go:164] Run: docker container inspect offline-docker-201000 --format={{.State.Status}}
	W1122 22:06:21.147874    8629 cli_runner.go:211] docker container inspect offline-docker-201000 --format={{.State.Status}} returned with exit code 1
	I1122 22:06:21.147933    8629 fix.go:102] recreateIfNeeded on offline-docker-201000: state= err=unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:21.147957    8629 fix.go:107] machineExists: false. err=machine does not exist
	I1122 22:06:21.169625    8629 out.go:177] * docker "offline-docker-201000" container is missing, will recreate.
	I1122 22:06:21.190400    8629 delete.go:124] DEMOLISHING offline-docker-201000 ...
	I1122 22:06:21.190625    8629 cli_runner.go:164] Run: docker container inspect offline-docker-201000 --format={{.State.Status}}
	W1122 22:06:21.242328    8629 cli_runner.go:211] docker container inspect offline-docker-201000 --format={{.State.Status}} returned with exit code 1
	W1122 22:06:21.242401    8629 stop.go:75] unable to get state: unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:21.242418    8629 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:21.242837    8629 cli_runner.go:164] Run: docker container inspect offline-docker-201000 --format={{.State.Status}}
	W1122 22:06:21.292632    8629 cli_runner.go:211] docker container inspect offline-docker-201000 --format={{.State.Status}} returned with exit code 1
	I1122 22:06:21.292684    8629 delete.go:82] Unable to get host status for offline-docker-201000, assuming it has already been deleted: state: unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:21.292764    8629 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-201000
	W1122 22:06:21.342720    8629 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-201000 returned with exit code 1
	I1122 22:06:21.342771    8629 kic.go:371] could not find the container offline-docker-201000 to remove it. will try anyways
	I1122 22:06:21.342847    8629 cli_runner.go:164] Run: docker container inspect offline-docker-201000 --format={{.State.Status}}
	W1122 22:06:21.392626    8629 cli_runner.go:211] docker container inspect offline-docker-201000 --format={{.State.Status}} returned with exit code 1
	W1122 22:06:21.392670    8629 oci.go:84] error getting container status, will try to delete anyways: unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:21.392747    8629 cli_runner.go:164] Run: docker exec --privileged -t offline-docker-201000 /bin/bash -c "sudo init 0"
	W1122 22:06:21.443095    8629 cli_runner.go:211] docker exec --privileged -t offline-docker-201000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1122 22:06:21.443151    8629 oci.go:650] error shutdown offline-docker-201000: docker exec --privileged -t offline-docker-201000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:22.445445    8629 cli_runner.go:164] Run: docker container inspect offline-docker-201000 --format={{.State.Status}}
	W1122 22:06:22.499575    8629 cli_runner.go:211] docker container inspect offline-docker-201000 --format={{.State.Status}} returned with exit code 1
	I1122 22:06:22.499627    8629 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:22.499639    8629 oci.go:664] temporary error: container offline-docker-201000 status is  but expect it to be exited
	I1122 22:06:22.499662    8629 retry.go:31] will retry after 272.175364ms: couldn't verify container is exited. %v: unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:22.773728    8629 cli_runner.go:164] Run: docker container inspect offline-docker-201000 --format={{.State.Status}}
	W1122 22:06:22.824760    8629 cli_runner.go:211] docker container inspect offline-docker-201000 --format={{.State.Status}} returned with exit code 1
	I1122 22:06:22.824805    8629 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:22.824817    8629 oci.go:664] temporary error: container offline-docker-201000 status is  but expect it to be exited
	I1122 22:06:22.824841    8629 retry.go:31] will retry after 994.823003ms: couldn't verify container is exited. %v: unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:23.820410    8629 cli_runner.go:164] Run: docker container inspect offline-docker-201000 --format={{.State.Status}}
	W1122 22:06:23.872092    8629 cli_runner.go:211] docker container inspect offline-docker-201000 --format={{.State.Status}} returned with exit code 1
	I1122 22:06:23.872155    8629 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:23.872177    8629 oci.go:664] temporary error: container offline-docker-201000 status is  but expect it to be exited
	I1122 22:06:23.872202    8629 retry.go:31] will retry after 1.261717935s: couldn't verify container is exited. %v: unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:25.134404    8629 cli_runner.go:164] Run: docker container inspect offline-docker-201000 --format={{.State.Status}}
	W1122 22:06:25.188416    8629 cli_runner.go:211] docker container inspect offline-docker-201000 --format={{.State.Status}} returned with exit code 1
	I1122 22:06:25.188463    8629 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:25.188475    8629 oci.go:664] temporary error: container offline-docker-201000 status is  but expect it to be exited
	I1122 22:06:25.188500    8629 retry.go:31] will retry after 2.250613592s: couldn't verify container is exited. %v: unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:27.441459    8629 cli_runner.go:164] Run: docker container inspect offline-docker-201000 --format={{.State.Status}}
	W1122 22:06:27.495241    8629 cli_runner.go:211] docker container inspect offline-docker-201000 --format={{.State.Status}} returned with exit code 1
	I1122 22:06:27.495289    8629 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:27.495298    8629 oci.go:664] temporary error: container offline-docker-201000 status is  but expect it to be exited
	I1122 22:06:27.495323    8629 retry.go:31] will retry after 1.5733315s: couldn't verify container is exited. %v: unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:29.071018    8629 cli_runner.go:164] Run: docker container inspect offline-docker-201000 --format={{.State.Status}}
	W1122 22:06:29.126044    8629 cli_runner.go:211] docker container inspect offline-docker-201000 --format={{.State.Status}} returned with exit code 1
	I1122 22:06:29.126121    8629 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:29.126130    8629 oci.go:664] temporary error: container offline-docker-201000 status is  but expect it to be exited
	I1122 22:06:29.126155    8629 retry.go:31] will retry after 3.438557882s: couldn't verify container is exited. %v: unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:32.567155    8629 cli_runner.go:164] Run: docker container inspect offline-docker-201000 --format={{.State.Status}}
	W1122 22:06:32.623047    8629 cli_runner.go:211] docker container inspect offline-docker-201000 --format={{.State.Status}} returned with exit code 1
	I1122 22:06:32.623093    8629 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:32.623102    8629 oci.go:664] temporary error: container offline-docker-201000 status is  but expect it to be exited
	I1122 22:06:32.623123    8629 retry.go:31] will retry after 4.638382149s: couldn't verify container is exited. %v: unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:37.263887    8629 cli_runner.go:164] Run: docker container inspect offline-docker-201000 --format={{.State.Status}}
	W1122 22:06:37.317381    8629 cli_runner.go:211] docker container inspect offline-docker-201000 --format={{.State.Status}} returned with exit code 1
	I1122 22:06:37.317433    8629 oci.go:662] temporary error verifying shutdown: unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:06:37.317446    8629 oci.go:664] temporary error: container offline-docker-201000 status is  but expect it to be exited
	I1122 22:06:37.317474    8629 oci.go:88] couldn't shut down offline-docker-201000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	 
	I1122 22:06:37.317548    8629 cli_runner.go:164] Run: docker rm -f -v offline-docker-201000
	I1122 22:06:37.368127    8629 cli_runner.go:164] Run: docker container inspect -f {{.Id}} offline-docker-201000
	W1122 22:06:37.417997    8629 cli_runner.go:211] docker container inspect -f {{.Id}} offline-docker-201000 returned with exit code 1
	I1122 22:06:37.418114    8629 cli_runner.go:164] Run: docker network inspect offline-docker-201000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 22:06:37.468489    8629 cli_runner.go:164] Run: docker network rm offline-docker-201000
	I1122 22:06:37.563422    8629 fix.go:114] Sleeping 1 second for extra luck!
	I1122 22:06:38.565630    8629 start.go:125] createHost starting for "" (driver="docker")
	I1122 22:06:38.587772    8629 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1122 22:06:38.587963    8629 start.go:159] libmachine.API.Create for "offline-docker-201000" (driver="docker")
	I1122 22:06:38.588002    8629 client.go:168] LocalClient.Create starting
	I1122 22:06:38.588253    8629 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca.pem
	I1122 22:06:38.588346    8629 main.go:141] libmachine: Decoding PEM data...
	I1122 22:06:38.588378    8629 main.go:141] libmachine: Parsing certificate...
	I1122 22:06:38.588451    8629 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/cert.pem
	I1122 22:06:38.588517    8629 main.go:141] libmachine: Decoding PEM data...
	I1122 22:06:38.588532    8629 main.go:141] libmachine: Parsing certificate...
	I1122 22:06:38.610264    8629 cli_runner.go:164] Run: docker network inspect offline-docker-201000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 22:06:38.662959    8629 cli_runner.go:211] docker network inspect offline-docker-201000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 22:06:38.663058    8629 network_create.go:281] running [docker network inspect offline-docker-201000] to gather additional debugging logs...
	I1122 22:06:38.663081    8629 cli_runner.go:164] Run: docker network inspect offline-docker-201000
	W1122 22:06:38.713477    8629 cli_runner.go:211] docker network inspect offline-docker-201000 returned with exit code 1
	I1122 22:06:38.713515    8629 network_create.go:284] error running [docker network inspect offline-docker-201000]: docker network inspect offline-docker-201000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network offline-docker-201000 not found
	I1122 22:06:38.713533    8629 network_create.go:286] output of [docker network inspect offline-docker-201000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network offline-docker-201000 not found
	
	** /stderr **
	I1122 22:06:38.713677    8629 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 22:06:38.765297    8629 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:06:38.766675    8629 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:06:38.768260    8629 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:06:38.769615    8629 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:06:38.770054    8629 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002286470}
	I1122 22:06:38.770068    8629 network_create.go:124] attempt to create docker network offline-docker-201000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I1122 22:06:38.770134    8629 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=offline-docker-201000 offline-docker-201000
	I1122 22:06:38.855738    8629 network_create.go:108] docker network offline-docker-201000 192.168.85.0/24 created
	I1122 22:06:38.855777    8629 kic.go:121] calculated static IP "192.168.85.2" for the "offline-docker-201000" container
	I1122 22:06:38.855884    8629 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 22:06:38.907332    8629 cli_runner.go:164] Run: docker volume create offline-docker-201000 --label name.minikube.sigs.k8s.io=offline-docker-201000 --label created_by.minikube.sigs.k8s.io=true
	I1122 22:06:38.957173    8629 oci.go:103] Successfully created a docker volume offline-docker-201000
	I1122 22:06:38.957294    8629 cli_runner.go:164] Run: docker run --rm --name offline-docker-201000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=offline-docker-201000 --entrypoint /usr/bin/test -v offline-docker-201000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1122 22:06:39.277359    8629 oci.go:107] Successfully prepared a docker volume offline-docker-201000
	I1122 22:06:39.277397    8629 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 22:06:39.277415    8629 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 22:06:39.277518    8629 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v offline-docker-201000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 22:12:38.585978    8629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 22:12:38.586107    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:12:38.638699    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	I1122 22:12:38.638812    8629 retry.go:31] will retry after 211.031025ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:12:38.850136    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:12:38.901489    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	I1122 22:12:38.901586    8629 retry.go:31] will retry after 289.764035ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:12:39.192601    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:12:39.245318    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	I1122 22:12:39.245415    8629 retry.go:31] will retry after 804.475654ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:12:40.052306    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:12:40.106767    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	W1122 22:12:40.106897    8629 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	
	W1122 22:12:40.106918    8629 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:12:40.106978    8629 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 22:12:40.107043    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:12:40.157206    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	I1122 22:12:40.157320    8629 retry.go:31] will retry after 173.274791ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:12:40.332699    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:12:40.385502    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	I1122 22:12:40.385623    8629 retry.go:31] will retry after 225.08212ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:12:40.612668    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:12:40.665604    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	I1122 22:12:40.665699    8629 retry.go:31] will retry after 610.931985ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:12:41.277291    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:12:41.329807    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	I1122 22:12:41.329904    8629 retry.go:31] will retry after 641.784066ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:12:41.972593    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:12:42.023551    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	W1122 22:12:42.023660    8629 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	
	W1122 22:12:42.023681    8629 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:12:42.023704    8629 start.go:128] duration metric: createHost completed in 6m3.462563566s
	I1122 22:12:42.023774    8629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 22:12:42.023837    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:12:42.073371    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	I1122 22:12:42.073468    8629 retry.go:31] will retry after 171.223773ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:12:42.247067    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:12:42.298275    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	I1122 22:12:42.298370    8629 retry.go:31] will retry after 448.517877ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:12:42.747169    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:12:42.798489    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	I1122 22:12:42.798581    8629 retry.go:31] will retry after 305.969544ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:12:43.105470    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:12:43.157092    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	I1122 22:12:43.157197    8629 retry.go:31] will retry after 646.871485ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:12:43.804571    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:12:43.856349    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	W1122 22:12:43.856455    8629 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	
	W1122 22:12:43.856469    8629 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:12:43.856537    8629 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 22:12:43.856592    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:12:43.906332    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	I1122 22:12:43.906428    8629 retry.go:31] will retry after 230.398587ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:12:44.138417    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:12:44.192618    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	I1122 22:12:44.192720    8629 retry.go:31] will retry after 196.095603ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:12:44.390268    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:12:44.443733    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	I1122 22:12:44.443834    8629 retry.go:31] will retry after 830.827257ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:12:45.275566    8629 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000
	W1122 22:12:45.328420    8629 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000 returned with exit code 1
	W1122 22:12:45.328522    8629 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	
	W1122 22:12:45.328550    8629 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "offline-docker-201000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" offline-docker-201000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000
	I1122 22:12:45.328559    8629 fix.go:56] fixHost completed within 6m24.240285072s
	I1122 22:12:45.328566    8629 start.go:83] releasing machines lock for "offline-docker-201000", held for 6m24.2403232s
	W1122 22:12:45.328641    8629 out.go:239] * Failed to start docker container. Running "minikube delete -p offline-docker-201000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p offline-docker-201000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1122 22:12:45.372056    8629 out.go:177] 
	W1122 22:12:45.394364    8629 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1122 22:12:45.394433    8629 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1122 22:12:45.394459    8629 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1122 22:12:45.416269    8629 out.go:177] 

                                                
                                                
** /stderr **
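The stderr trace above shows how minikube looks up the node's SSH port: it templates docker container inspect against the published 22/tcp port and retries with short backoffs, but every attempt returns "No such container" because the container was never created before the 360-second createHost timeout expired. A minimal way to confirm that state by hand, assuming the same profile name offline-docker-201000 from this run, is:

	# list any container created for the profile; in this failure nothing is expected
	docker ps -a --filter name=offline-docker-201000
	# the same 22/tcp host-port lookup minikube performs; returns "No such container" when the node is absent
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' offline-docker-201000
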
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-201000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  failed: exit status 52
panic.go:523: *** TestOffline FAILED at 2023-11-22 22:12:45.493096 -0800 PST m=+5875.573026749
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestOffline]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect offline-docker-201000
helpers_test.go:235: (dbg) docker inspect offline-docker-201000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "offline-docker-201000",
	        "Id": "ecd4c54f53d3878d0a5eac5513b969da151cf08193a7ed1838cfe7ad94422019",
	        "Created": "2023-11-23T06:06:38.81672396Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "offline-docker-201000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
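Note that docker inspect resolved the bare name to the network object rather than a container: the output above is a bridge network with an empty Containers map, which matches the earlier "No such container" errors. Querying the two object types explicitly, assuming the same profile name, separates the leftover network from the missing node:

	# the network created earlier in the run still exists
	docker network inspect offline-docker-201000
	# the node container was never created; this lookup is expected to fail
	docker container inspect offline-docker-201000
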
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-201000 -n offline-docker-201000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-201000 -n offline-docker-201000: exit status 7 (106.357248ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 22:12:45.652310    9281 status.go:249] status error: host: state: unknown state "offline-docker-201000": docker container inspect offline-docker-201000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: offline-docker-201000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-201000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "offline-docker-201000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-201000
--- FAIL: TestOffline (753.86s)
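The harness finishes by deleting the profile (helpers_test.go:178 above), which is also the tool's own suggestion ("minikube delete -p offline-docker-201000"); the inspect output shows the bridge network is the only artifact left behind. Reproducing that recovery by hand, assuming the same binary, profile name, and flags reported in the log, would look like:

	# delete the failed profile and whatever minikube still tracks for it
	out/minikube-darwin-amd64 delete -p offline-docker-201000
	# drop the orphaned bridge network if docker still lists it after the delete
	docker network rm offline-docker-201000 || true
	# retry the same start invocation used by the test
	out/minikube-darwin-amd64 start -p offline-docker-201000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker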

                                                
                                    
TestCertOptions (7200.747s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-016000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E1122 22:27:31.217979    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 22:27:41.246969    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 22:27:58.187996    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 22:32:31.213803    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 22:32:58.181696    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestCertExpiration (9m28s)
	TestCertOptions (8m58s)
	TestNetworkPlugins (34m37s)
	TestNetworkPlugins/group (34m37s)

                                                
                                                
goroutine 2169 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2259 +0x3b9
created by time.goFunc
	/usr/local/go/src/time/sleep.go:176 +0x2d

                                                
                                                
goroutine 1 [chan receive, 22 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc000522680, 0xc000b17b80)
	/usr/local/go/src/testing/testing.go:1601 +0x138
testing.runTests(0xc000bf8000?, {0x526cf20, 0x2a, 0x2a}, {0x10b00e5?, 0xc0001900c0?, 0x528e6e0?})
	/usr/local/go/src/testing/testing.go:2052 +0x445
testing.(*M).Run(0xc000bf8000)
	/usr/local/go/src/testing/testing.go:1925 +0x636
k8s.io/minikube/test/integration.TestMain(0xc00008a6f0?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x88
main.main()
	_testmain.go:131 +0x1c6

                                                
                                                
goroutine 8 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00020b900)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 585 [syscall, 8 minutes]:
syscall.syscall6(0x1010585?, 0xc00205d8f8?, 0xc00205d7e8?, 0xc00205d918?, 0x100c00205d8e0?, 0x1000000000003?, 0x4d313e28?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc00205d890?, 0x1010905?, 0x90?, 0x3058800?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:43 +0x45
syscall.Wait4(0xc000b6e580?, 0xc00205d8c4, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc002bbc540)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc002488840)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc000281380?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc000281380, 0xc002488840)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.TestCertOptions(0xc000281380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:49 +0x40e
testing.tRunner(0xc000281380, 0x3b37828)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1846 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc00070c870)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000523ba0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000523ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc000523ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:98 +0x89
testing.tRunner(0xc000523ba0, 0x3b37930)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 132 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000bd3940, 0xc0000640c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 149
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cache.go:122 +0x594

                                                
                                                
goroutine 38 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.110.1/klog.go:1157 +0x111
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 37
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.110.1/klog.go:1153 +0x171

                                                
                                                
goroutine 1860 [chan receive, 34 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc0006836c0, 0xc00220a0a8)
	/usr/local/go/src/testing/testing.go:1601 +0x138
created by testing.(*T).Run in goroutine 1778
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 131 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00224c9c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 149
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 1847 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc00070c870)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000523d40)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000523d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc000523d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:156 +0x86
testing.tRunner(0xc000523d40, 0x3b37958)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1848 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc00070c870)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc00219c000)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00219c000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc00219c000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:228 +0x39
testing.tRunner(0xc00219c000, 0x3b378d8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1862 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc00070c870)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc01b67b1e0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc01b67b1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc01b67b1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc01b67b1e0, 0xc0021a0100)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1860
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1780 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc00070c870)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc00219d040)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00219d040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc00219d040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc00219d040, 0x3b37920)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1865 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc00070c870)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc00211bba0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00211bba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00211bba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc00211bba0, 0xc0021a0300)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1860
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1861 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc00070c870)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000192680)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000192680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000192680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc000192680, 0xc0021a0000)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1860
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1867 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc00070c870)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002262340)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002262340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002262340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc002262340, 0xc0021a0400)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1860
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1866 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc00070c870)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002262000)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002262000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002262000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc002262000, 0xc0021a0380)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1860
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1864 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc00070c870)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc00211ab60)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00211ab60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00211ab60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc00211ab60, 0xc0021a0280)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1860
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2139 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x4caeb0f0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc00275a660?, 0xc0027b2af0?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00275a660, {0xc0027b2af0, 0x510, 0x510})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002ade150, {0xc0027b2af0?, 0xc002b3a468?, 0xc000086668?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002826540, {0x3f847a0, 0xc002ade150})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f84820, 0xc002826540}, {0x3f847a0, 0xc002ade150}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 586
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

                                                
                                                
goroutine 154 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000bd3910, 0x2d)
	/usr/local/go/src/runtime/sema.go:527 +0x159
sync.(*Cond).Wait(0x3f81720?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00224c840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000bd3940)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3f85cc0, 0xc000b7fa40}, 0x1, 0xc0000640c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc0001157d0?, 0x15e8745?, 0xc00224c9c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 132
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 155 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3fa87b8, 0xc0000640c0}, 0xc001ff9f50, 0x2a33545?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3fa87b8, 0xc0000640c0}, 0xf8?, 0xc0006116b0?, 0xc000160700?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3fa87b8?, 0xc0000640c0?}, 0xc000803d40?, 0x1137520?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x11383e5?, 0xc000803d40?, 0xc002046600?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 132
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 156 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 155
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 1779 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc00070c870)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc00219cd00)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00219cd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNoKubernetes(0xc00219cd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/no_kubernetes_test.go:33 +0x36
testing.tRunner(0xc00219cd00, 0x3b37910)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2141 [select, 9 minutes]:
os/exec.(*Cmd).watchCtx(0xc002488580, 0xc002246600)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 586
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 2140 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x4caeaff8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc00275a720?, 0xc0007a7463?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00275a720, {0xc0007a7463, 0x39d, 0x39d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002ade170, {0xc0007a7463?, 0xc002b39e68?, 0xc002b39e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002826570, {0x3f847a0, 0xc002ade170})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f84820, 0xc002826570}, {0x3f847a0, 0xc002ade170}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc002727560?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 586
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

                                                
                                                
goroutine 1849 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc00070c870)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc00219c680)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00219c680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestMissingContainerUpgrade(0xc00219c680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:305 +0xb4
testing.tRunner(0xc00219c680, 0x3b378f0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1863 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc00070c870)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc00211a4e0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00211a4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00211a4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc00211a4e0, 0xc0021a0200)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1860
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1868 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc00070c870)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002262680)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002262680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002262680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc002262680, 0xc0021a0480)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1860
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2168 [select, 8 minutes]:
os/exec.(*Cmd).watchCtx(0xc002488840, 0xc002246900)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 585
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 1869 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc00070c870)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc002262820)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc002262820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002262820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc002262820, 0xc0021a0500)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1860
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 908 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc002047bd0, 0x2c)
	/usr/local/go/src/runtime/sema.go:527 +0x159
sync.(*Cond).Wait(0x3f81720?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002670d20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002047c00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00010ff90?, {0x3f85cc0, 0xc000beb410}, 0x1, 0xc0000640c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000665b60?, 0x3b9aca00, 0x0, 0xd0?, 0x104471c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x117bcc5?, 0xc0008c7080?, 0xc002c0fce0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 886
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 1835 [chan receive, 34 minutes]:
testing.(*testContext).waitParallel(0xc00070c870)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc000522b60)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000522b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc001ffe660?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc000522b60, 0x3b37950)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 586 [syscall, 9 minutes]:
syscall.syscall6(0x1010585?, 0xc000b27a98?, 0xc000b27988?, 0xc000b27ab8?, 0x100c000b27a80?, 0x1000000000003?, 0x4d313e28?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc000b27a30?, 0x1010905?, 0x90?, 0x3058800?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:43 +0x45
syscall.Wait4(0xc000b1f4c0?, 0xc000b27a64, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc002bbc240)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc002488580)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc000683380?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc000683380, 0xc002488580)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.TestCertExpiration(0xc000683380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:123 +0x2d7
testing.tRunner(0xc000683380, 0x3b37820)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 2167 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x4caea458, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc00275ab40?, 0xc0007a7863?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00275ab40, {0xc0007a7863, 0x39d, 0x39d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002ade1a0, {0xc0007a7863?, 0xc001ff5668?, 0xc001ff5668?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002826870, {0x3f847a0, 0xc002ade1a0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f84820, 0xc002826870}, {0x3f847a0, 0xc002ade1a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc0029f24e0?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 585
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

                                                
                                                
goroutine 885 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002670e40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 806
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/util/workqueue/delaying_queue.go:113 +0x21f

                                                
                                                
goroutine 2166 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x4caeb1e8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc00275aa80?, 0xc0027b22e4?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00275aa80, {0xc0027b22e4, 0x51c, 0x51c})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002ade180, {0xc0027b22e4?, 0xc000be3800?, 0xc001ff4e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002826840, {0x3f847a0, 0xc002ade180})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f84820, 0xc002826840}, {0x3f847a0, 0xc002ade180}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc002246840?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 585
	/usr/local/go/src/os/exec/exec.go:716 +0xa0a

                                                
                                                
goroutine 676 [IO wait, 115 minutes]:
internal/poll.runtime_pollWait(0x4caeae08, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc000765480?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000765480)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc000765480)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0003032c0)
	/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e
net.(*TCPListener).Accept(0xc0003032c0)
	/usr/local/go/src/net/tcpsock.go:315 +0x30
net/http.(*Server).Serve(0xc000572780, {0x3f9bdc0, 0xc0003032c0})
	/usr/local/go/src/net/http/server.go:3056 +0x364
net/http.(*Server).ListenAndServe(0xc000572780)
	/usr/local/go/src/net/http/server.go:2985 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0020d0680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 673
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x13a

                                                
                                                
goroutine 910 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 909
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:280 +0xc5

                                                
                                                
goroutine 1299 [select, 111 minutes]:
net/http.(*persistConn).writeLoop(0xc0029c85a0)
	/usr/local/go/src/net/http/transport.go:2421 +0xe5
created by net/http.(*Transport).dialConn in goroutine 1292
	/usr/local/go/src/net/http/transport.go:1777 +0x16f1

                                                
                                                
goroutine 1778 [chan receive, 34 minutes]:
testing.(*T).Run(0xc00219c340, {0x30ea934?, 0x3f3d5cb15cc?}, 0xc00220a0a8)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00219c340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc00219c340, 0x3b37908)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

                                                
                                                
goroutine 1211 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc00287e160, 0xc0028449c0)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1210
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 1228 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc0029cb340, 0xc002845ce0)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1227
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 886 [chan receive, 113 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002047c00, 0xc0000640c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 806
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cache.go:122 +0x594

                                                
                                                
goroutine 909 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3fa87b8, 0xc0000640c0}, 0xc001ff7750, 0xc002524e98?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3fa87b8, 0xc0000640c0}, 0x1?, 0x1?, 0xc001ff77b8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3fa87b8?, 0xc0000640c0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001ff77d0?, 0x117bd27?, 0xc00016e380?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.28.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 886
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.28.4/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 1298 [select, 111 minutes]:
net/http.(*persistConn).readLoop(0xc0029c85a0)
	/usr/local/go/src/net/http/transport.go:2238 +0xd25
created by net/http.(*Transport).dialConn in goroutine 1292
	/usr/local/go/src/net/http/transport.go:1776 +0x169f

                                                
                                                
goroutine 1240 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc0028da000, 0xc002703980)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 793
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9

                                                
                                                
goroutine 1060 [chan send, 111 minutes]:
os/exec.(*Cmd).watchCtx(0xc0021c58c0, 0xc000065680)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1059
	/usr/local/go/src/os/exec/exec.go:743 +0x9c9
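
A note on the frames above: the recurring "[chan send, 111 minutes] os/exec.(*Cmd).watchCtx" goroutines (1211, 1228, 1240, 1060) are the shape the runtime prints when a command started with a context has had that context expire but nothing ever consumed its result, i.e. Wait never ran to completion for that command. That is one plausible reading of this dump, not a confirmed root cause. A hypothetical minimal reproduction on a recent Go toolchain (the child command and timings are invented; nothing here is taken from the test code):

// watchctx_leak.go - invented example; running it should print a stack dump that
// contains a goroutine parked in os/exec.(*Cmd).watchCtx in "chan send" state,
// matching the frames above.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"runtime"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()

	cmd := exec.CommandContext(ctx, "sleep", "60")
	_ = cmd.Start() // Wait is deliberately never called

	time.Sleep(2 * time.Second) // let the context expire and the child be killed

	buf := make([]byte, 1<<20)
	n := runtime.Stack(buf, true)
	fmt.Println(string(buf[:n]))
}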

                                                
                                    
TestDockerFlags (755.88s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-889000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E1122 22:17:31.227402    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 22:17:58.198247    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 22:22:14.279533    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 22:22:31.223071    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 22:22:58.191738    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-889000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : exit status 52 (12m34.571075619s)

                                                
                                                
-- stdout --
	* [docker-flags-889000] minikube v1.32.0 on Darwin 14.1.1
	  - MINIKUBE_LOCATION=17659
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node docker-flags-889000 in cluster docker-flags-889000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "docker-flags-889000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 22:13:16.103160    9424 out.go:296] Setting OutFile to fd 1 ...
	I1122 22:13:16.103356    9424 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 22:13:16.103361    9424 out.go:309] Setting ErrFile to fd 2...
	I1122 22:13:16.103365    9424 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 22:13:16.103540    9424 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
	I1122 22:13:16.104968    9424 out.go:303] Setting JSON to false
	I1122 22:13:16.127449    9424 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6170,"bootTime":1700713826,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1122 22:13:16.127561    9424 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1122 22:13:16.149790    9424 out.go:177] * [docker-flags-889000] minikube v1.32.0 on Darwin 14.1.1
	I1122 22:13:16.192203    9424 out.go:177]   - MINIKUBE_LOCATION=17659
	I1122 22:13:16.192286    9424 notify.go:220] Checking for updates...
	I1122 22:13:16.234045    9424 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	I1122 22:13:16.255281    9424 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1122 22:13:16.277301    9424 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 22:13:16.299129    9424 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	I1122 22:13:16.320223    9424 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 22:13:16.342995    9424 config.go:182] Loaded profile config "force-systemd-flag-958000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1122 22:13:16.343150    9424 driver.go:378] Setting default libvirt URI to qemu:///system
	I1122 22:13:16.399210    9424 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.2 (129061)
	I1122 22:13:16.399351    9424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 22:13:16.501379    9424 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:false NGoroutines:198 SystemTime:2023-11-23 06:13:16.49040689 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218719232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-160d99154625 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile
=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescripti
on:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:
Docker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1122 22:13:16.544047    9424 out.go:177] * Using the docker driver based on user configuration
	I1122 22:13:16.565985    9424 start.go:298] selected driver: docker
	I1122 22:13:16.566010    9424 start.go:902] validating driver "docker" against <nil>
	I1122 22:13:16.566028    9424 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 22:13:16.570375    9424 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 22:13:16.670788    9424 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:14 ContainersRunning:1 ContainersPaused:0 ContainersStopped:13 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:false NGoroutines:198 SystemTime:2023-11-23 06:13:16.661250384 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSe
rverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218719232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-160d99154625 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profil
e=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescript
ion:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription
:Docker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1122 22:13:16.670985    9424 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1122 22:13:16.671158    9424 start_flags.go:926] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1122 22:13:16.693229    9424 out.go:177] * Using Docker Desktop driver with root privileges
	I1122 22:13:16.714976    9424 cni.go:84] Creating CNI manager for ""
	I1122 22:13:16.715018    9424 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1122 22:13:16.715034    9424 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1122 22:13:16.715049    9424 start_flags.go:323] config:
	{Name:docker-flags-889000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-889000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomai
n:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0
s GPUs:}
	I1122 22:13:16.757727    9424 out.go:177] * Starting control plane node docker-flags-889000 in cluster docker-flags-889000
	I1122 22:13:16.778966    9424 cache.go:121] Beginning downloading kic base image for docker with docker
	I1122 22:13:16.800626    9424 out.go:177] * Pulling base image ...
	I1122 22:13:16.843818    9424 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 22:13:16.843895    9424 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1122 22:13:16.843917    9424 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1122 22:13:16.843929    9424 cache.go:56] Caching tarball of preloaded images
	I1122 22:13:16.844167    9424 preload.go:174] Found /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1122 22:13:16.844185    9424 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1122 22:13:16.844948    9424 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/docker-flags-889000/config.json ...
	I1122 22:13:16.845179    9424 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/docker-flags-889000/config.json: {Name:mk650d7aa77e27a4e5dcad1420dedd7395f1c05c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 22:13:16.896619    9424 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1122 22:13:16.896641    9424 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1122 22:13:16.896661    9424 cache.go:194] Successfully downloaded all kic artifacts
	I1122 22:13:16.896704    9424 start.go:365] acquiring machines lock for docker-flags-889000: {Name:mk03a259e7a295df39be69f43b4a48c5cbe37b1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 22:13:16.896853    9424 start.go:369] acquired machines lock for "docker-flags-889000" in 136.501µs
	I1122 22:13:16.896878    9424 start.go:93] Provisioning new machine with config: &{Name:docker-flags-889000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-889000 Nam
espace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1122 22:13:16.896953    9424 start.go:125] createHost starting for "" (driver="docker")
	I1122 22:13:16.940672    9424 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1122 22:13:16.941090    9424 start.go:159] libmachine.API.Create for "docker-flags-889000" (driver="docker")
	I1122 22:13:16.941144    9424 client.go:168] LocalClient.Create starting
	I1122 22:13:16.941324    9424 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca.pem
	I1122 22:13:16.941413    9424 main.go:141] libmachine: Decoding PEM data...
	I1122 22:13:16.941445    9424 main.go:141] libmachine: Parsing certificate...
	I1122 22:13:16.941557    9424 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/cert.pem
	I1122 22:13:16.941625    9424 main.go:141] libmachine: Decoding PEM data...
	I1122 22:13:16.941644    9424 main.go:141] libmachine: Parsing certificate...
	I1122 22:13:16.942652    9424 cli_runner.go:164] Run: docker network inspect docker-flags-889000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 22:13:16.992965    9424 cli_runner.go:211] docker network inspect docker-flags-889000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 22:13:16.993064    9424 network_create.go:281] running [docker network inspect docker-flags-889000] to gather additional debugging logs...
	I1122 22:13:16.993087    9424 cli_runner.go:164] Run: docker network inspect docker-flags-889000
	W1122 22:13:17.044759    9424 cli_runner.go:211] docker network inspect docker-flags-889000 returned with exit code 1
	I1122 22:13:17.044783    9424 network_create.go:284] error running [docker network inspect docker-flags-889000]: docker network inspect docker-flags-889000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-889000 not found
	I1122 22:13:17.044794    9424 network_create.go:286] output of [docker network inspect docker-flags-889000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-889000 not found
	
	** /stderr **
	I1122 22:13:17.044933    9424 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 22:13:17.097133    9424 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:13:17.098673    9424 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:13:17.099031    9424 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0021c9c00}
	I1122 22:13:17.099045    9424 network_create.go:124] attempt to create docker network docker-flags-889000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1122 22:13:17.099112    9424 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-889000 docker-flags-889000
	W1122 22:13:17.148714    9424 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-889000 docker-flags-889000 returned with exit code 1
	W1122 22:13:17.148750    9424 network_create.go:149] failed to create docker network docker-flags-889000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-889000 docker-flags-889000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1122 22:13:17.148774    9424 network_create.go:116] failed to create docker network docker-flags-889000 192.168.67.0/24, will retry: subnet is taken
	I1122 22:13:17.150161    9424 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:13:17.150539    9424 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002330b70}
	I1122 22:13:17.150552    9424 network_create.go:124] attempt to create docker network docker-flags-889000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1122 22:13:17.150614    9424 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-889000 docker-flags-889000
	I1122 22:13:17.234965    9424 network_create.go:108] docker network docker-flags-889000 192.168.76.0/24 created
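
For context on the two network_create attempts just above: "Pool overlaps with other one on this address space" from docker network create means the candidate /24 is already claimed by an existing Docker network, so the next candidate is probed (here 192.168.67.0/24 was taken and 192.168.76.0/24 succeeded). A stripped-down sketch of that probing loop, reusing only the CLI flags visible in the log; the function shape and the step of 9 between candidates are assumptions for illustration, not minikube's actual network_create.go:

// subnet_probe.go - illustrative only
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "docker-flags-889000" // network name taken from the log above
	for octet := 49; octet <= 103; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet, "--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=65535",
			name).CombinedOutput()
		if err == nil {
			fmt.Printf("created %s on %s\n", name, subnet)
			return
		}
		if strings.Contains(string(out), "Pool overlaps") {
			// The /24 is held by another network; try the next candidate.
			continue
		}
		fmt.Printf("giving up: %v: %s\n", err, out)
		return
	}
}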
	I1122 22:13:17.235017    9424 kic.go:121] calculated static IP "192.168.76.2" for the "docker-flags-889000" container
	I1122 22:13:17.235141    9424 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 22:13:17.287222    9424 cli_runner.go:164] Run: docker volume create docker-flags-889000 --label name.minikube.sigs.k8s.io=docker-flags-889000 --label created_by.minikube.sigs.k8s.io=true
	I1122 22:13:17.337699    9424 oci.go:103] Successfully created a docker volume docker-flags-889000
	I1122 22:13:17.337817    9424 cli_runner.go:164] Run: docker run --rm --name docker-flags-889000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-889000 --entrypoint /usr/bin/test -v docker-flags-889000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1122 22:13:17.700933    9424 oci.go:107] Successfully prepared a docker volume docker-flags-889000
	I1122 22:13:17.700976    9424 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 22:13:17.700989    9424 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 22:13:17.701078    9424 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-889000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
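
Note the timestamps around this step: the preload extraction above is issued at 22:13:17 and the very next log line is at 22:19:16, by which point the 360-second createHost budget (see the timeout further down) has been spent without the docker-flags-889000 container ever being created. Purely as an illustration of bounding such a step, a sketch using a context deadline; the 5-minute limit and the error handling are assumptions, not minikube's code, while the mount paths and image tag are copied from the log:

// bounded_extract.go - illustrative only
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	preload := "/Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4"
	cmd := exec.CommandContext(ctx, "docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", preload+":/preloaded.tar:ro",
		"-v", "docker-flags-889000:/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if ctx.Err() == context.DeadlineExceeded {
		fmt.Println("extraction exceeded its deadline; the docker daemon is likely wedged")
		return
	}
	fmt.Printf("err=%v\n%s\n", err, out)
}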
	I1122 22:19:16.937683    9424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 22:19:16.937827    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:19:16.992314    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	I1122 22:19:16.992442    9424 retry.go:31] will retry after 300.714074ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:17.295486    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:19:17.349967    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	I1122 22:19:17.350084    9424 retry.go:31] will retry after 188.733852ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:17.541273    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:19:17.595110    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	I1122 22:19:17.595206    9424 retry.go:31] will retry after 742.221001ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:18.339951    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:19:18.394971    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	W1122 22:19:18.395077    9424 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	
	W1122 22:19:18.395098    9424 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:18.395152    9424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 22:19:18.395217    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:19:18.445288    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	I1122 22:19:18.445382    9424 retry.go:31] will retry after 280.27897ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:18.727588    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:19:18.779009    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	I1122 22:19:18.779110    9424 retry.go:31] will retry after 227.674071ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:19.009175    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:19:19.060700    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	I1122 22:19:19.060798    9424 retry.go:31] will retry after 512.778163ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:19.575823    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:19:19.630092    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	W1122 22:19:19.630200    9424 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	
	W1122 22:19:19.630220    9424 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:19.630235    9424 start.go:128] duration metric: createHost completed in 6m2.739218704s
	I1122 22:19:19.630242    9424 start.go:83] releasing machines lock for "docker-flags-889000", held for 6m2.739331974s
	W1122 22:19:19.630256    9424 start.go:691] error starting host: creating host: create host timed out in 360.000000 seconds
	I1122 22:19:19.630681    9424 cli_runner.go:164] Run: docker container inspect docker-flags-889000 --format={{.State.Status}}
	W1122 22:19:19.680185    9424 cli_runner.go:211] docker container inspect docker-flags-889000 --format={{.State.Status}} returned with exit code 1
	I1122 22:19:19.680244    9424 delete.go:82] Unable to get host status for docker-flags-889000, assuming it has already been deleted: state: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	W1122 22:19:19.680342    9424 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1122 22:19:19.680352    9424 start.go:706] Will try again in 5 seconds ...
	I1122 22:19:24.682512    9424 start.go:365] acquiring machines lock for docker-flags-889000: {Name:mk03a259e7a295df39be69f43b4a48c5cbe37b1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 22:19:24.683687    9424 start.go:369] acquired machines lock for "docker-flags-889000" in 1.127812ms
	I1122 22:19:24.683747    9424 start.go:96] Skipping create...Using existing machine configuration
	I1122 22:19:24.683764    9424 fix.go:54] fixHost starting: 
	I1122 22:19:24.684249    9424 cli_runner.go:164] Run: docker container inspect docker-flags-889000 --format={{.State.Status}}
	W1122 22:19:24.738107    9424 cli_runner.go:211] docker container inspect docker-flags-889000 --format={{.State.Status}} returned with exit code 1
	I1122 22:19:24.738151    9424 fix.go:102] recreateIfNeeded on docker-flags-889000: state= err=unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:24.738170    9424 fix.go:107] machineExists: false. err=machine does not exist
	I1122 22:19:24.759962    9424 out.go:177] * docker "docker-flags-889000" container is missing, will recreate.
	I1122 22:19:24.802726    9424 delete.go:124] DEMOLISHING docker-flags-889000 ...
	I1122 22:19:24.802905    9424 cli_runner.go:164] Run: docker container inspect docker-flags-889000 --format={{.State.Status}}
	W1122 22:19:24.853725    9424 cli_runner.go:211] docker container inspect docker-flags-889000 --format={{.State.Status}} returned with exit code 1
	W1122 22:19:24.853786    9424 stop.go:75] unable to get state: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:24.853820    9424 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:24.854216    9424 cli_runner.go:164] Run: docker container inspect docker-flags-889000 --format={{.State.Status}}
	W1122 22:19:24.903829    9424 cli_runner.go:211] docker container inspect docker-flags-889000 --format={{.State.Status}} returned with exit code 1
	I1122 22:19:24.903893    9424 delete.go:82] Unable to get host status for docker-flags-889000, assuming it has already been deleted: state: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:24.903987    9424 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-889000
	W1122 22:19:24.953519    9424 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-889000 returned with exit code 1
	I1122 22:19:24.953559    9424 kic.go:371] could not find the container docker-flags-889000 to remove it. will try anyways
	I1122 22:19:24.953624    9424 cli_runner.go:164] Run: docker container inspect docker-flags-889000 --format={{.State.Status}}
	W1122 22:19:25.003018    9424 cli_runner.go:211] docker container inspect docker-flags-889000 --format={{.State.Status}} returned with exit code 1
	W1122 22:19:25.003072    9424 oci.go:84] error getting container status, will try to delete anyways: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:25.003159    9424 cli_runner.go:164] Run: docker exec --privileged -t docker-flags-889000 /bin/bash -c "sudo init 0"
	W1122 22:19:25.052372    9424 cli_runner.go:211] docker exec --privileged -t docker-flags-889000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1122 22:19:25.052411    9424 oci.go:650] error shutdown docker-flags-889000: docker exec --privileged -t docker-flags-889000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:26.053008    9424 cli_runner.go:164] Run: docker container inspect docker-flags-889000 --format={{.State.Status}}
	W1122 22:19:26.126535    9424 cli_runner.go:211] docker container inspect docker-flags-889000 --format={{.State.Status}} returned with exit code 1
	I1122 22:19:26.126589    9424 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:26.126599    9424 oci.go:664] temporary error: container docker-flags-889000 status is  but expect it to be exited
	I1122 22:19:26.126620    9424 retry.go:31] will retry after 415.891304ms: couldn't verify container is exited. %v: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:26.544852    9424 cli_runner.go:164] Run: docker container inspect docker-flags-889000 --format={{.State.Status}}
	W1122 22:19:26.599361    9424 cli_runner.go:211] docker container inspect docker-flags-889000 --format={{.State.Status}} returned with exit code 1
	I1122 22:19:26.599412    9424 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:26.599423    9424 oci.go:664] temporary error: container docker-flags-889000 status is  but expect it to be exited
	I1122 22:19:26.599444    9424 retry.go:31] will retry after 676.626072ms: couldn't verify container is exited. %v: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:27.276908    9424 cli_runner.go:164] Run: docker container inspect docker-flags-889000 --format={{.State.Status}}
	W1122 22:19:27.331693    9424 cli_runner.go:211] docker container inspect docker-flags-889000 --format={{.State.Status}} returned with exit code 1
	I1122 22:19:27.331743    9424 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:27.331758    9424 oci.go:664] temporary error: container docker-flags-889000 status is  but expect it to be exited
	I1122 22:19:27.331783    9424 retry.go:31] will retry after 985.491447ms: couldn't verify container is exited. %v: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:28.317998    9424 cli_runner.go:164] Run: docker container inspect docker-flags-889000 --format={{.State.Status}}
	W1122 22:19:28.370709    9424 cli_runner.go:211] docker container inspect docker-flags-889000 --format={{.State.Status}} returned with exit code 1
	I1122 22:19:28.370756    9424 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:28.370768    9424 oci.go:664] temporary error: container docker-flags-889000 status is  but expect it to be exited
	I1122 22:19:28.370789    9424 retry.go:31] will retry after 1.451818443s: couldn't verify container is exited. %v: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:29.823415    9424 cli_runner.go:164] Run: docker container inspect docker-flags-889000 --format={{.State.Status}}
	W1122 22:19:29.877557    9424 cli_runner.go:211] docker container inspect docker-flags-889000 --format={{.State.Status}} returned with exit code 1
	I1122 22:19:29.877605    9424 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:29.877621    9424 oci.go:664] temporary error: container docker-flags-889000 status is  but expect it to be exited
	I1122 22:19:29.877646    9424 retry.go:31] will retry after 2.965389213s: couldn't verify container is exited. %v: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:32.844033    9424 cli_runner.go:164] Run: docker container inspect docker-flags-889000 --format={{.State.Status}}
	W1122 22:19:32.898312    9424 cli_runner.go:211] docker container inspect docker-flags-889000 --format={{.State.Status}} returned with exit code 1
	I1122 22:19:32.898359    9424 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:32.898369    9424 oci.go:664] temporary error: container docker-flags-889000 status is  but expect it to be exited
	I1122 22:19:32.898396    9424 retry.go:31] will retry after 3.293334367s: couldn't verify container is exited. %v: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:36.192448    9424 cli_runner.go:164] Run: docker container inspect docker-flags-889000 --format={{.State.Status}}
	W1122 22:19:36.245204    9424 cli_runner.go:211] docker container inspect docker-flags-889000 --format={{.State.Status}} returned with exit code 1
	I1122 22:19:36.245252    9424 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:36.245261    9424 oci.go:664] temporary error: container docker-flags-889000 status is  but expect it to be exited
	I1122 22:19:36.245285    9424 retry.go:31] will retry after 6.861040078s: couldn't verify container is exited. %v: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:43.107744    9424 cli_runner.go:164] Run: docker container inspect docker-flags-889000 --format={{.State.Status}}
	W1122 22:19:43.159961    9424 cli_runner.go:211] docker container inspect docker-flags-889000 --format={{.State.Status}} returned with exit code 1
	I1122 22:19:43.160014    9424 oci.go:662] temporary error verifying shutdown: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:19:43.160024    9424 oci.go:664] temporary error: container docker-flags-889000 status is  but expect it to be exited
	I1122 22:19:43.160053    9424 oci.go:88] couldn't shut down docker-flags-889000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	 
	I1122 22:19:43.160125    9424 cli_runner.go:164] Run: docker rm -f -v docker-flags-889000
	I1122 22:19:43.210592    9424 cli_runner.go:164] Run: docker container inspect -f {{.Id}} docker-flags-889000
	W1122 22:19:43.260156    9424 cli_runner.go:211] docker container inspect -f {{.Id}} docker-flags-889000 returned with exit code 1
	I1122 22:19:43.260273    9424 cli_runner.go:164] Run: docker network inspect docker-flags-889000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 22:19:43.311427    9424 cli_runner.go:164] Run: docker network rm docker-flags-889000
	I1122 22:19:43.424711    9424 fix.go:114] Sleeping 1 second for extra luck!
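
The "will retry after ..." delays in the shutdown-verification loop further up (415ms, 676ms, 985ms, 1.45s, 2.96s, 3.29s, 6.86s) are consistent with a jittered exponential backoff. A self-contained sketch of that pattern; the helper name, initial delay, and budget are illustrative assumptions, not minikube's retry package:

// backoff_retry.go - illustrative only
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfter keeps calling fn until it succeeds or the time budget is spent,
// sleeping a randomized, growing delay between attempts.
func retryAfter(budget time.Duration, fn func() error) error {
	deadline := time.Now().Add(budget)
	delay := 400 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("retry budget exhausted: %w", err)
		}
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	attempts := 0
	_ = retryAfter(10*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("couldn't verify container is exited")
		}
		return nil
	})
}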
	I1122 22:19:44.426851    9424 start.go:125] createHost starting for "" (driver="docker")
	I1122 22:19:44.450156    9424 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1122 22:19:44.450335    9424 start.go:159] libmachine.API.Create for "docker-flags-889000" (driver="docker")
	I1122 22:19:44.450380    9424 client.go:168] LocalClient.Create starting
	I1122 22:19:44.450575    9424 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca.pem
	I1122 22:19:44.450667    9424 main.go:141] libmachine: Decoding PEM data...
	I1122 22:19:44.450691    9424 main.go:141] libmachine: Parsing certificate...
	I1122 22:19:44.450785    9424 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/cert.pem
	I1122 22:19:44.450856    9424 main.go:141] libmachine: Decoding PEM data...
	I1122 22:19:44.450872    9424 main.go:141] libmachine: Parsing certificate...
	I1122 22:19:44.451686    9424 cli_runner.go:164] Run: docker network inspect docker-flags-889000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 22:19:44.504478    9424 cli_runner.go:211] docker network inspect docker-flags-889000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 22:19:44.504580    9424 network_create.go:281] running [docker network inspect docker-flags-889000] to gather additional debugging logs...
	I1122 22:19:44.504603    9424 cli_runner.go:164] Run: docker network inspect docker-flags-889000
	W1122 22:19:44.554846    9424 cli_runner.go:211] docker network inspect docker-flags-889000 returned with exit code 1
	I1122 22:19:44.554872    9424 network_create.go:284] error running [docker network inspect docker-flags-889000]: docker network inspect docker-flags-889000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-889000 not found
	I1122 22:19:44.554882    9424 network_create.go:286] output of [docker network inspect docker-flags-889000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-889000 not found
	
	** /stderr **
	I1122 22:19:44.555025    9424 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 22:19:44.606663    9424 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:19:44.608124    9424 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:19:44.609714    9424 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:19:44.611083    9424 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:19:44.612561    9424 network.go:212] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:19:44.612899    9424 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000617800}
	I1122 22:19:44.612911    9424 network_create.go:124] attempt to create docker network docker-flags-889000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I1122 22:19:44.612978    9424 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-889000 docker-flags-889000
	I1122 22:19:44.698645    9424 network_create.go:108] docker network docker-flags-889000 192.168.94.0/24 created
	I1122 22:19:44.698694    9424 kic.go:121] calculated static IP "192.168.94.2" for the "docker-flags-889000" container
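The network.go lines above walk candidate private /24 subnets and skip any that are already reserved before creating the cluster network. A rough sketch of that walk, assuming a caller-supplied reservation check rather than minikube's real overlap logic:

```go
// Rough sketch of the subnet walk implied by the log: try 192.168.49.0/24,
// then step the third octet by 9 (58, 67, 76, 85, 94, ...) until a candidate
// is not reserved. The isReserved callback stands in for minikube's real
// reservation/overlap checks.
package main

import "fmt"

func firstFreeSubnet(isReserved func(string) bool) (string, bool) {
	for third := 49; third <= 254; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !isReserved(cidr) {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
		"192.168.76.0/24": true, "192.168.85.0/24": true,
	}
	cidr, ok := firstFreeSubnet(func(c string) bool { return taken[c] })
	fmt.Println(cidr, ok) // 192.168.94.0/24 true, matching the log above
}
```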
	I1122 22:19:44.698815    9424 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 22:19:44.752960    9424 cli_runner.go:164] Run: docker volume create docker-flags-889000 --label name.minikube.sigs.k8s.io=docker-flags-889000 --label created_by.minikube.sigs.k8s.io=true
	I1122 22:19:44.802787    9424 oci.go:103] Successfully created a docker volume docker-flags-889000
	I1122 22:19:44.802908    9424 cli_runner.go:164] Run: docker run --rm --name docker-flags-889000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-889000 --entrypoint /usr/bin/test -v docker-flags-889000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1122 22:19:45.094383    9424 oci.go:107] Successfully prepared a docker volume docker-flags-889000
	I1122 22:19:45.094417    9424 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 22:19:45.094436    9424 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 22:19:45.094556    9424 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v docker-flags-889000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
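The preload step above mounts the cached .tar.lz4 read-only into a throwaway kicbase container and untars it into the named volume. A hedged sketch of the same command shape driven from Go; the image tag, volume name, and tarball path below are placeholders, not this host's real paths:

```go
// Illustrative only: the same shape of command as the log line above, run from
// Go. Image tag, volume name, and tarball path are placeholders for this
// sketch, not real paths on the CI host.
package main

import (
	"fmt"
	"os/exec"
)

func extractPreload(image, volume, tarball string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(extractPreload(
		"gcr.io/k8s-minikube/kicbase-builds:<tag>",
		"docker-flags-889000",
		"/path/to/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4"))
}
```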
	I1122 22:25:44.446844    9424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 22:25:44.446971    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:25:44.503253    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	I1122 22:25:44.503364    9424 retry.go:31] will retry after 222.254167ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:25:44.726715    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:25:44.780203    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	I1122 22:25:44.780317    9424 retry.go:31] will retry after 544.048044ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:25:45.325258    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:25:45.378021    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	I1122 22:25:45.378143    9424 retry.go:31] will retry after 645.042035ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:25:46.025573    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:25:46.078833    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	W1122 22:25:46.078940    9424 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	
	W1122 22:25:46.078964    9424 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
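The repeated inspect calls above use a Go template to pull the host port that Docker mapped to the container's port 22, which is the port minikube would ssh to. The sketch below evaluates that exact template against made-up data to show what it extracts:

```go
// The template used in the inspect calls above, evaluated against made-up
// data: docker publishes the container's port 22 under
// NetworkSettings.Ports["22/tcp"], and the template reads the first mapping's
// HostPort (the local port minikube would ssh to). The sample values here are
// invented for illustration.
package main

import (
	"fmt"
	"os"
	"text/template"
)

type portBinding struct {
	HostIp   string
	HostPort string
}

type inspectResult struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func main() {
	var c inspectResult
	c.NetworkSettings.Ports = map[string][]portBinding{
		"22/tcp": {{HostIp: "127.0.0.1", HostPort: "56789"}}, // hypothetical mapping
	}
	tmpl := template.Must(template.New("sshPort").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	if err := tmpl.Execute(os.Stdout, c); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Println() // prints: 56789
}
```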
	I1122 22:25:46.079016    9424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 22:25:46.079088    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:25:46.131016    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	I1122 22:25:46.131115    9424 retry.go:31] will retry after 188.81472ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:25:46.322175    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:25:46.373381    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	I1122 22:25:46.373478    9424 retry.go:31] will retry after 515.820179ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:25:46.889708    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:25:46.941694    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	I1122 22:25:46.941790    9424 retry.go:31] will retry after 453.760214ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:25:47.397017    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:25:47.448088    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	W1122 22:25:47.448189    9424 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	
	W1122 22:25:47.448213    9424 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:25:47.448226    9424 start.go:128] duration metric: createHost completed in 6m3.027279292s
	I1122 22:25:47.448291    9424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 22:25:47.448352    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:25:47.497704    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	I1122 22:25:47.497801    9424 retry.go:31] will retry after 305.183167ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:25:47.803932    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:25:47.855063    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	I1122 22:25:47.855149    9424 retry.go:31] will retry after 217.472941ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:25:48.073580    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:25:48.127120    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	I1122 22:25:48.127208    9424 retry.go:31] will retry after 596.192505ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:25:48.724584    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:25:48.775147    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	W1122 22:25:48.775246    9424 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	
	W1122 22:25:48.775267    9424 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:25:48.775326    9424 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 22:25:48.775402    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:25:48.825087    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	I1122 22:25:48.825176    9424 retry.go:31] will retry after 215.875296ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:25:49.043199    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:25:49.095941    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	I1122 22:25:49.096032    9424 retry.go:31] will retry after 460.711134ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:25:49.559087    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:25:49.614623    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	I1122 22:25:49.614733    9424 retry.go:31] will retry after 767.182242ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:25:50.383659    9424 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000
	W1122 22:25:50.435624    9424 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000 returned with exit code 1
	W1122 22:25:50.435731    9424 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	
	W1122 22:25:50.435755    9424 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "docker-flags-889000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-889000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	I1122 22:25:50.435767    9424 fix.go:56] fixHost completed within 6m25.758327525s
	I1122 22:25:50.435774    9424 start.go:83] releasing machines lock for "docker-flags-889000", held for 6m25.758376557s
	W1122 22:25:50.435853    9424 out.go:239] * Failed to start docker container. Running "minikube delete -p docker-flags-889000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p docker-flags-889000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1122 22:25:50.479347    9424 out.go:177] 
	W1122 22:25:50.501615    9424 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1122 22:25:50.501668    9424 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1122 22:25:50.501723    9424 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1122 22:25:50.545403    9424 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-889000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-889000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-889000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 80 (207.68854ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_45ab9b4ee43b1ccee1cc1cad42a504b375b49bd8_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-889000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 80
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-889000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-889000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 80 (196.427231ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_0c4d48d3465e4cc08ca5bd2bd06b407509a1612b_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-889000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 80
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-889000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
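For context, the assertions docker_test.go is reporting boil down to: ssh into the node, dump the docker unit's properties, and look for the values passed via --docker-env / --docker-opt. A hypothetical sketch of that check, with the binary path and profile name taken from the log and everything else illustrative:

```go
// Hypothetical sketch of the check docker_test.go is reporting on: ssh into
// the node, dump the docker unit's Environment, and assert the values passed
// via --docker-env are present. Binary path and profile name are copied from
// the log; the helper itself is illustrative, not the real test code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func dockerProperty(minikube, profile, property string) (string, error) {
	out, err := exec.Command(minikube, "-p", profile, "ssh",
		"sudo systemctl show docker --property="+property+" --no-pager").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := dockerProperty("out/minikube-darwin-amd64", "docker-flags-889000", "Environment")
	if err != nil {
		fmt.Println("ssh failed:", err) // in this report it fails: the container never existed
		return
	}
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		if !strings.Contains(out, want) {
			fmt.Printf("expected %q in docker Environment, got %q\n", want, out)
		}
	}
}
```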
panic.go:523: *** TestDockerFlags FAILED at 2023-11-22 22:25:51.024892 -0800 PST m=+6661.117739465
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerFlags]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect docker-flags-889000
helpers_test.go:235: (dbg) docker inspect docker-flags-889000:

-- stdout --
	[
	    {
	        "Name": "docker-flags-889000",
	        "Id": "9cc0b9106ca2783442c4af1144fb37cce53477d4c5542e8a75bd35e9f6411548",
	        "Created": "2023-11-23T06:19:44.659392917Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "docker-flags-889000"
	        }
	    }
	]

-- /stdout --
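The container never came up, but the docker network minikube created for it did, which is why the post-mortem inspect above returns a network object rather than a container. A hypothetical cleanup sketch using the labels shown in that output; the profile deletion at the end of this test does the equivalent.

```go
// Hypothetical cleanup sketch: networks created by minikube carry the labels
// shown in the inspect output above, so leftovers can be listed and removed by
// label. The profile deletion at the end of this test does the equivalent.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List IDs of networks labelled as created by minikube.
	out, err := exec.Command("docker", "network", "ls", "-q",
		"--filter", "label=created_by.minikube.sigs.k8s.io=true").Output()
	if err != nil {
		fmt.Println("listing networks failed:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		if rmErr := exec.Command("docker", "network", "rm", id).Run(); rmErr != nil {
			fmt.Println("could not remove network", id, ":", rmErr) // e.g. still in use
		}
	}
}
```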
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-889000 -n docker-flags-889000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-889000 -n docker-flags-889000: exit status 7 (107.162931ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1122 22:25:51.184969   10058 status.go:249] status error: host: state: unknown state "docker-flags-889000": docker container inspect docker-flags-889000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: docker-flags-889000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-889000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "docker-flags-889000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-889000
--- FAIL: TestDockerFlags (755.88s)

TestForceSystemdFlag (755.43s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-958000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
E1122 22:12:58.202272    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-958000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : exit status 52 (12m34.348299579s)

-- stdout --
	* [force-systemd-flag-958000] minikube v1.32.0 on Darwin 14.1.1
	  - MINIKUBE_LOCATION=17659
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node force-systemd-flag-958000 in cluster force-systemd-flag-958000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-flag-958000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1122 22:12:46.438748    9305 out.go:296] Setting OutFile to fd 1 ...
	I1122 22:12:46.438976    9305 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 22:12:46.438983    9305 out.go:309] Setting ErrFile to fd 2...
	I1122 22:12:46.438987    9305 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 22:12:46.439167    9305 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
	I1122 22:12:46.440651    9305 out.go:303] Setting JSON to false
	I1122 22:12:46.462996    9305 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6140,"bootTime":1700713826,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1122 22:12:46.463114    9305 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1122 22:12:46.485536    9305 out.go:177] * [force-systemd-flag-958000] minikube v1.32.0 on Darwin 14.1.1
	I1122 22:12:46.528948    9305 out.go:177]   - MINIKUBE_LOCATION=17659
	I1122 22:12:46.529105    9305 notify.go:220] Checking for updates...
	I1122 22:12:46.551052    9305 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	I1122 22:12:46.573167    9305 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1122 22:12:46.594712    9305 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 22:12:46.616191    9305 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	I1122 22:12:46.659903    9305 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 22:12:46.683764    9305 config.go:182] Loaded profile config "force-systemd-env-255000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1122 22:12:46.683934    9305 driver.go:378] Setting default libvirt URI to qemu:///system
	I1122 22:12:46.740897    9305 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.2 (129061)
	I1122 22:12:46.741031    9305 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 22:12:46.840132    9305 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:90 OomKillDisable:false NGoroutines:188 SystemTime:2023-11-23 06:12:46.829885291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSe
rverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218719232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-160d99154625 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profil
e=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescript
ion:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription
:Docker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1122 22:12:46.861443    9305 out.go:177] * Using the docker driver based on user configuration
	I1122 22:12:46.905671    9305 start.go:298] selected driver: docker
	I1122 22:12:46.905706    9305 start.go:902] validating driver "docker" against <nil>
	I1122 22:12:46.905733    9305 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 22:12:46.910109    9305 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 22:12:47.012025    9305 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:13 ContainersRunning:1 ContainersPaused:0 ContainersStopped:12 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:90 OomKillDisable:false NGoroutines:188 SystemTime:2023-11-23 06:12:47.00248505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218719232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-160d99154625 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile
=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescripti
on:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:
Docker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1122 22:12:47.012257    9305 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1122 22:12:47.012443    9305 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1122 22:12:47.034087    9305 out.go:177] * Using Docker Desktop driver with root privileges
	I1122 22:12:47.056017    9305 cni.go:84] Creating CNI manager for ""
	I1122 22:12:47.056059    9305 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1122 22:12:47.056077    9305 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1122 22:12:47.056095    9305 start_flags.go:323] config:
	{Name:force-systemd-flag-958000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-958000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1122 22:12:47.104658    9305 out.go:177] * Starting control plane node force-systemd-flag-958000 in cluster force-systemd-flag-958000
	I1122 22:12:47.126608    9305 cache.go:121] Beginning downloading kic base image for docker with docker
	I1122 22:12:47.148765    9305 out.go:177] * Pulling base image ...
	I1122 22:12:47.190423    9305 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 22:12:47.190484    9305 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1122 22:12:47.190498    9305 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1122 22:12:47.190533    9305 cache.go:56] Caching tarball of preloaded images
	I1122 22:12:47.190747    9305 preload.go:174] Found /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1122 22:12:47.190767    9305 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1122 22:12:47.191581    9305 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/force-systemd-flag-958000/config.json ...
	I1122 22:12:47.191799    9305 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/force-systemd-flag-958000/config.json: {Name:mk5e7bbc6f97f3c6c9b77b03da6336592f8f6a02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 22:12:47.244109    9305 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1122 22:12:47.244152    9305 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1122 22:12:47.244171    9305 cache.go:194] Successfully downloaded all kic artifacts
	I1122 22:12:47.244221    9305 start.go:365] acquiring machines lock for force-systemd-flag-958000: {Name:mka1c82c5a4085ec8f753c513dd5bf9113c17b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 22:12:47.244383    9305 start.go:369] acquired machines lock for "force-systemd-flag-958000" in 149.248µs
	I1122 22:12:47.244408    9305 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-958000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-958000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1122 22:12:47.244480    9305 start.go:125] createHost starting for "" (driver="docker")
	I1122 22:12:47.288974    9305 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1122 22:12:47.289357    9305 start.go:159] libmachine.API.Create for "force-systemd-flag-958000" (driver="docker")
	I1122 22:12:47.289444    9305 client.go:168] LocalClient.Create starting
	I1122 22:12:47.289681    9305 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca.pem
	I1122 22:12:47.289777    9305 main.go:141] libmachine: Decoding PEM data...
	I1122 22:12:47.289813    9305 main.go:141] libmachine: Parsing certificate...
	I1122 22:12:47.289940    9305 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/cert.pem
	I1122 22:12:47.290010    9305 main.go:141] libmachine: Decoding PEM data...
	I1122 22:12:47.290027    9305 main.go:141] libmachine: Parsing certificate...
	I1122 22:12:47.290921    9305 cli_runner.go:164] Run: docker network inspect force-systemd-flag-958000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 22:12:47.342034    9305 cli_runner.go:211] docker network inspect force-systemd-flag-958000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 22:12:47.342139    9305 network_create.go:281] running [docker network inspect force-systemd-flag-958000] to gather additional debugging logs...
	I1122 22:12:47.342158    9305 cli_runner.go:164] Run: docker network inspect force-systemd-flag-958000
	W1122 22:12:47.393118    9305 cli_runner.go:211] docker network inspect force-systemd-flag-958000 returned with exit code 1
	I1122 22:12:47.393147    9305 network_create.go:284] error running [docker network inspect force-systemd-flag-958000]: docker network inspect force-systemd-flag-958000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-958000 not found
	I1122 22:12:47.393165    9305 network_create.go:286] output of [docker network inspect force-systemd-flag-958000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-958000 not found
	
	** /stderr **
	I1122 22:12:47.393305    9305 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 22:12:47.445050    9305 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:12:47.445433    9305 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002178e80}
	I1122 22:12:47.445448    9305 network_create.go:124] attempt to create docker network force-systemd-flag-958000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1122 22:12:47.445520    9305 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-958000 force-systemd-flag-958000
	I1122 22:12:47.531538    9305 network_create.go:108] docker network force-systemd-flag-958000 192.168.58.0/24 created
	I1122 22:12:47.531577    9305 kic.go:121] calculated static IP "192.168.58.2" for the "force-systemd-flag-958000" container
	I1122 22:12:47.531689    9305 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 22:12:47.584138    9305 cli_runner.go:164] Run: docker volume create force-systemd-flag-958000 --label name.minikube.sigs.k8s.io=force-systemd-flag-958000 --label created_by.minikube.sigs.k8s.io=true
	I1122 22:12:47.634574    9305 oci.go:103] Successfully created a docker volume force-systemd-flag-958000
	I1122 22:12:47.634685    9305 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-958000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-958000 --entrypoint /usr/bin/test -v force-systemd-flag-958000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1122 22:12:48.071801    9305 oci.go:107] Successfully prepared a docker volume force-systemd-flag-958000
	I1122 22:12:48.071863    9305 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 22:12:48.071876    9305 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 22:12:48.071992    9305 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-958000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 22:18:47.285891    9305 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 22:18:47.286028    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:18:47.339320    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	I1122 22:18:47.339455    9305 retry.go:31] will retry after 270.776062ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:18:47.610936    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:18:47.662634    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	I1122 22:18:47.662731    9305 retry.go:31] will retry after 494.547388ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:18:48.159675    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:18:48.214516    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	I1122 22:18:48.214634    9305 retry.go:31] will retry after 773.056416ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:18:48.988153    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:18:49.041278    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	W1122 22:18:49.041386    9305 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	
	W1122 22:18:49.041409    9305 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:18:49.041466    9305 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 22:18:49.041524    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:18:49.091270    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	I1122 22:18:49.091363    9305 retry.go:31] will retry after 162.755859ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:18:49.255047    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:18:49.307717    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	I1122 22:18:49.307806    9305 retry.go:31] will retry after 312.067334ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:18:49.621154    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:18:49.673770    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	I1122 22:18:49.673866    9305 retry.go:31] will retry after 594.081852ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:18:50.268970    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:18:50.323807    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	I1122 22:18:50.323912    9305 retry.go:31] will retry after 478.96612ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:18:50.803226    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:18:50.854181    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	W1122 22:18:50.854283    9305 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	
	W1122 22:18:50.854302    9305 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:18:50.854326    9305 start.go:128] duration metric: createHost completed in 6m3.615834719s
	I1122 22:18:50.854335    9305 start.go:83] releasing machines lock for "force-systemd-flag-958000", held for 6m3.61594254s
	W1122 22:18:50.854349    9305 start.go:691] error starting host: creating host: create host timed out in 360.000000 seconds
	I1122 22:18:50.854808    9305 cli_runner.go:164] Run: docker container inspect force-systemd-flag-958000 --format={{.State.Status}}
	W1122 22:18:50.904654    9305 cli_runner.go:211] docker container inspect force-systemd-flag-958000 --format={{.State.Status}} returned with exit code 1
	I1122 22:18:50.904706    9305 delete.go:82] Unable to get host status for force-systemd-flag-958000, assuming it has already been deleted: state: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	W1122 22:18:50.904785    9305 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1122 22:18:50.904800    9305 start.go:706] Will try again in 5 seconds ...
	I1122 22:18:55.906004    9305 start.go:365] acquiring machines lock for force-systemd-flag-958000: {Name:mka1c82c5a4085ec8f753c513dd5bf9113c17b54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 22:18:55.906297    9305 start.go:369] acquired machines lock for "force-systemd-flag-958000" in 188.741µs
	I1122 22:18:55.906340    9305 start.go:96] Skipping create...Using existing machine configuration
	I1122 22:18:55.906364    9305 fix.go:54] fixHost starting: 
	I1122 22:18:55.906856    9305 cli_runner.go:164] Run: docker container inspect force-systemd-flag-958000 --format={{.State.Status}}
	W1122 22:18:55.958545    9305 cli_runner.go:211] docker container inspect force-systemd-flag-958000 --format={{.State.Status}} returned with exit code 1
	I1122 22:18:55.958591    9305 fix.go:102] recreateIfNeeded on force-systemd-flag-958000: state= err=unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:18:55.958610    9305 fix.go:107] machineExists: false. err=machine does not exist
	I1122 22:18:55.980373    9305 out.go:177] * docker "force-systemd-flag-958000" container is missing, will recreate.
	I1122 22:18:56.024058    9305 delete.go:124] DEMOLISHING force-systemd-flag-958000 ...
	I1122 22:18:56.024245    9305 cli_runner.go:164] Run: docker container inspect force-systemd-flag-958000 --format={{.State.Status}}
	W1122 22:18:56.075527    9305 cli_runner.go:211] docker container inspect force-systemd-flag-958000 --format={{.State.Status}} returned with exit code 1
	W1122 22:18:56.075578    9305 stop.go:75] unable to get state: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:18:56.075599    9305 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:18:56.075973    9305 cli_runner.go:164] Run: docker container inspect force-systemd-flag-958000 --format={{.State.Status}}
	W1122 22:18:56.125590    9305 cli_runner.go:211] docker container inspect force-systemd-flag-958000 --format={{.State.Status}} returned with exit code 1
	I1122 22:18:56.125660    9305 delete.go:82] Unable to get host status for force-systemd-flag-958000, assuming it has already been deleted: state: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:18:56.125743    9305 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-958000
	W1122 22:18:56.175526    9305 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-958000 returned with exit code 1
	I1122 22:18:56.175562    9305 kic.go:371] could not find the container force-systemd-flag-958000 to remove it. will try anyways
	I1122 22:18:56.175637    9305 cli_runner.go:164] Run: docker container inspect force-systemd-flag-958000 --format={{.State.Status}}
	W1122 22:18:56.225500    9305 cli_runner.go:211] docker container inspect force-systemd-flag-958000 --format={{.State.Status}} returned with exit code 1
	W1122 22:18:56.225553    9305 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:18:56.225631    9305 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-958000 /bin/bash -c "sudo init 0"
	W1122 22:18:56.275053    9305 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-958000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1122 22:18:56.275082    9305 oci.go:650] error shutdown force-systemd-flag-958000: docker exec --privileged -t force-systemd-flag-958000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:18:57.277355    9305 cli_runner.go:164] Run: docker container inspect force-systemd-flag-958000 --format={{.State.Status}}
	W1122 22:18:57.331101    9305 cli_runner.go:211] docker container inspect force-systemd-flag-958000 --format={{.State.Status}} returned with exit code 1
	I1122 22:18:57.331171    9305 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:18:57.331181    9305 oci.go:664] temporary error: container force-systemd-flag-958000 status is  but expect it to be exited
	I1122 22:18:57.331213    9305 retry.go:31] will retry after 522.895954ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:18:57.855386    9305 cli_runner.go:164] Run: docker container inspect force-systemd-flag-958000 --format={{.State.Status}}
	W1122 22:18:57.910356    9305 cli_runner.go:211] docker container inspect force-systemd-flag-958000 --format={{.State.Status}} returned with exit code 1
	I1122 22:18:57.910411    9305 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:18:57.910425    9305 oci.go:664] temporary error: container force-systemd-flag-958000 status is  but expect it to be exited
	I1122 22:18:57.910450    9305 retry.go:31] will retry after 853.394192ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:18:58.765997    9305 cli_runner.go:164] Run: docker container inspect force-systemd-flag-958000 --format={{.State.Status}}
	W1122 22:18:58.817359    9305 cli_runner.go:211] docker container inspect force-systemd-flag-958000 --format={{.State.Status}} returned with exit code 1
	I1122 22:18:58.817405    9305 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:18:58.817418    9305 oci.go:664] temporary error: container force-systemd-flag-958000 status is  but expect it to be exited
	I1122 22:18:58.817443    9305 retry.go:31] will retry after 797.365645ms: couldn't verify container is exited. %v: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:18:59.617146    9305 cli_runner.go:164] Run: docker container inspect force-systemd-flag-958000 --format={{.State.Status}}
	W1122 22:18:59.669026    9305 cli_runner.go:211] docker container inspect force-systemd-flag-958000 --format={{.State.Status}} returned with exit code 1
	I1122 22:18:59.669082    9305 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:18:59.669091    9305 oci.go:664] temporary error: container force-systemd-flag-958000 status is  but expect it to be exited
	I1122 22:18:59.669113    9305 retry.go:31] will retry after 2.251401878s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:19:01.922187    9305 cli_runner.go:164] Run: docker container inspect force-systemd-flag-958000 --format={{.State.Status}}
	W1122 22:19:01.975388    9305 cli_runner.go:211] docker container inspect force-systemd-flag-958000 --format={{.State.Status}} returned with exit code 1
	I1122 22:19:01.975436    9305 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:19:01.975451    9305 oci.go:664] temporary error: container force-systemd-flag-958000 status is  but expect it to be exited
	I1122 22:19:01.975474    9305 retry.go:31] will retry after 2.395271791s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:19:04.373125    9305 cli_runner.go:164] Run: docker container inspect force-systemd-flag-958000 --format={{.State.Status}}
	W1122 22:19:04.427013    9305 cli_runner.go:211] docker container inspect force-systemd-flag-958000 --format={{.State.Status}} returned with exit code 1
	I1122 22:19:04.427064    9305 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:19:04.427073    9305 oci.go:664] temporary error: container force-systemd-flag-958000 status is  but expect it to be exited
	I1122 22:19:04.427099    9305 retry.go:31] will retry after 4.622499007s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:19:09.049800    9305 cli_runner.go:164] Run: docker container inspect force-systemd-flag-958000 --format={{.State.Status}}
	W1122 22:19:09.102717    9305 cli_runner.go:211] docker container inspect force-systemd-flag-958000 --format={{.State.Status}} returned with exit code 1
	I1122 22:19:09.102770    9305 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:19:09.102780    9305 oci.go:664] temporary error: container force-systemd-flag-958000 status is  but expect it to be exited
	I1122 22:19:09.102804    9305 retry.go:31] will retry after 4.322638576s: couldn't verify container is exited. %v: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:19:13.427674    9305 cli_runner.go:164] Run: docker container inspect force-systemd-flag-958000 --format={{.State.Status}}
	W1122 22:19:13.480909    9305 cli_runner.go:211] docker container inspect force-systemd-flag-958000 --format={{.State.Status}} returned with exit code 1
	I1122 22:19:13.480963    9305 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:19:13.480973    9305 oci.go:664] temporary error: container force-systemd-flag-958000 status is  but expect it to be exited
	I1122 22:19:13.481001    9305 oci.go:88] couldn't shut down force-systemd-flag-958000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	 
	I1122 22:19:13.481083    9305 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-958000
	I1122 22:19:13.532782    9305 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-958000
	W1122 22:19:13.582544    9305 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-958000 returned with exit code 1
	I1122 22:19:13.582662    9305 cli_runner.go:164] Run: docker network inspect force-systemd-flag-958000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 22:19:13.632321    9305 cli_runner.go:164] Run: docker network rm force-systemd-flag-958000
	I1122 22:19:13.732864    9305 fix.go:114] Sleeping 1 second for extra luck!
	I1122 22:19:14.733341    9305 start.go:125] createHost starting for "" (driver="docker")
	I1122 22:19:14.756692    9305 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1122 22:19:14.756880    9305 start.go:159] libmachine.API.Create for "force-systemd-flag-958000" (driver="docker")
	I1122 22:19:14.756915    9305 client.go:168] LocalClient.Create starting
	I1122 22:19:14.757131    9305 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca.pem
	I1122 22:19:14.757220    9305 main.go:141] libmachine: Decoding PEM data...
	I1122 22:19:14.757248    9305 main.go:141] libmachine: Parsing certificate...
	I1122 22:19:14.757331    9305 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/cert.pem
	I1122 22:19:14.757403    9305 main.go:141] libmachine: Decoding PEM data...
	I1122 22:19:14.757419    9305 main.go:141] libmachine: Parsing certificate...
	I1122 22:19:14.779182    9305 cli_runner.go:164] Run: docker network inspect force-systemd-flag-958000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 22:19:14.833589    9305 cli_runner.go:211] docker network inspect force-systemd-flag-958000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 22:19:14.833689    9305 network_create.go:281] running [docker network inspect force-systemd-flag-958000] to gather additional debugging logs...
	I1122 22:19:14.833709    9305 cli_runner.go:164] Run: docker network inspect force-systemd-flag-958000
	W1122 22:19:14.883823    9305 cli_runner.go:211] docker network inspect force-systemd-flag-958000 returned with exit code 1
	I1122 22:19:14.883858    9305 network_create.go:284] error running [docker network inspect force-systemd-flag-958000]: docker network inspect force-systemd-flag-958000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-958000 not found
	I1122 22:19:14.883872    9305 network_create.go:286] output of [docker network inspect force-systemd-flag-958000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-958000 not found
	
	** /stderr **
	I1122 22:19:14.884022    9305 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 22:19:14.935799    9305 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:19:14.937264    9305 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:19:14.938825    9305 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:19:14.940421    9305 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:19:14.940877    9305 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00230bbf0}
	I1122 22:19:14.940894    9305 network_create.go:124] attempt to create docker network force-systemd-flag-958000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 65535 ...
	I1122 22:19:14.940976    9305 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-958000 force-systemd-flag-958000
	I1122 22:19:15.026079    9305 network_create.go:108] docker network force-systemd-flag-958000 192.168.85.0/24 created
	I1122 22:19:15.026116    9305 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-958000" container
	I1122 22:19:15.026224    9305 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 22:19:15.078724    9305 cli_runner.go:164] Run: docker volume create force-systemd-flag-958000 --label name.minikube.sigs.k8s.io=force-systemd-flag-958000 --label created_by.minikube.sigs.k8s.io=true
	I1122 22:19:15.128812    9305 oci.go:103] Successfully created a docker volume force-systemd-flag-958000
	I1122 22:19:15.128929    9305 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-958000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-958000 --entrypoint /usr/bin/test -v force-systemd-flag-958000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1122 22:19:15.416768    9305 oci.go:107] Successfully prepared a docker volume force-systemd-flag-958000
	I1122 22:19:15.416810    9305 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 22:19:15.416823    9305 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 22:19:15.416923    9305 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-958000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 22:25:14.753169    9305 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 22:25:14.753308    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:25:14.806091    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	I1122 22:25:14.806214    9305 retry.go:31] will retry after 221.095323ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:25:15.029705    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:25:15.082199    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	I1122 22:25:15.082307    9305 retry.go:31] will retry after 351.675954ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:25:15.436410    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:25:15.490740    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	I1122 22:25:15.490858    9305 retry.go:31] will retry after 561.842671ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:25:16.055133    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:25:16.123381    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	W1122 22:25:16.123497    9305 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	
	W1122 22:25:16.123517    9305 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:25:16.123572    9305 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 22:25:16.123626    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:25:16.173119    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	I1122 22:25:16.173227    9305 retry.go:31] will retry after 209.357667ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:25:16.384354    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:25:16.455087    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	I1122 22:25:16.455179    9305 retry.go:31] will retry after 246.567659ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:25:16.703596    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:25:16.756077    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	I1122 22:25:16.756176    9305 retry.go:31] will retry after 506.815358ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:25:17.264367    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:25:17.315638    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	W1122 22:25:17.315765    9305 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	
	W1122 22:25:17.315783    9305 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:25:17.315793    9305 start.go:128] duration metric: createHost completed in 6m2.588290108s
	I1122 22:25:17.315862    9305 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 22:25:17.315932    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:25:17.365364    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	I1122 22:25:17.365454    9305 retry.go:31] will retry after 309.162667ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:25:17.676957    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:25:17.732619    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	I1122 22:25:17.732711    9305 retry.go:31] will retry after 474.700937ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:25:18.208501    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:25:18.258490    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	I1122 22:25:18.258586    9305 retry.go:31] will retry after 787.880319ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:25:19.046938    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:25:19.099994    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	W1122 22:25:19.100094    9305 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	
	W1122 22:25:19.100107    9305 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:25:19.100179    9305 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 22:25:19.100244    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:25:19.150076    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	I1122 22:25:19.150165    9305 retry.go:31] will retry after 220.204481ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:25:19.371780    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:25:19.423054    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	I1122 22:25:19.423148    9305 retry.go:31] will retry after 277.923173ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:25:19.701422    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:25:19.753333    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	I1122 22:25:19.753434    9305 retry.go:31] will retry after 741.850751ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:25:20.497700    9305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000
	W1122 22:25:20.550114    9305 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000 returned with exit code 1
	W1122 22:25:20.550231    9305 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	
	W1122 22:25:20.550250    9305 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-flag-958000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-958000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	I1122 22:25:20.550265    9305 fix.go:56] fixHost completed within 6m24.650210898s
	I1122 22:25:20.550272    9305 start.go:83] releasing machines lock for "force-systemd-flag-958000", held for 6m24.650264576s
	W1122 22:25:20.550351    9305 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-958000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-958000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1122 22:25:20.593847    9305 out.go:177] 
	W1122 22:25:20.615858    9305 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1122 22:25:20.615908    9305 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1122 22:25:20.615928    9305 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1122 22:25:20.637759    9305 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-958000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-958000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-958000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (197.094274ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_bee0f26250c13d3e98e295459d643952c0091a53_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-958000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-11-22 22:25:20.932128 -0800 PST m=+6631.024482325
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-flag-958000
helpers_test.go:235: (dbg) docker inspect force-systemd-flag-958000:

-- stdout --
	[
	    {
	        "Name": "force-systemd-flag-958000",
	        "Id": "afe3546be24d8ab4c8f3d11e3c3b0e911dc61840c41b561661f906c0c8fbac8b",
	        "Created": "2023-11-23T06:19:14.987466104Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-flag-958000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-958000 -n force-systemd-flag-958000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-958000 -n force-systemd-flag-958000: exit status 7 (106.498438ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1122 22:25:21.090278    9935 status.go:249] status error: host: state: unknown state "force-systemd-flag-958000": docker container inspect force-systemd-flag-958000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-flag-958000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-958000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-flag-958000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-958000
--- FAIL: TestForceSystemdFlag (755.43s)

TestForceSystemdEnv (756.98s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-255000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E1122 22:02:31.234568    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 22:02:58.206537    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 22:05:34.291297    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 22:07:31.238157    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 22:07:58.207985    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 22:11:01.266078    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 22:12:31.233568    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-255000 --memory=2048 --alsologtostderr -v=5 --driver=docker : exit status 52 (12m35.907162232s)

-- stdout --
	* [force-systemd-env-255000] minikube v1.32.0 on Darwin 14.1.1
	  - MINIKUBE_LOCATION=17659
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node force-systemd-env-255000 in cluster force-systemd-env-255000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "force-systemd-env-255000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

-- /stdout --
** stderr ** 
	I1122 22:00:39.123916    8836 out.go:296] Setting OutFile to fd 1 ...
	I1122 22:00:39.124138    8836 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 22:00:39.124143    8836 out.go:309] Setting ErrFile to fd 2...
	I1122 22:00:39.124147    8836 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 22:00:39.124319    8836 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
	I1122 22:00:39.125869    8836 out.go:303] Setting JSON to false
	I1122 22:00:39.148097    8836 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5413,"bootTime":1700713826,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1122 22:00:39.148188    8836 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1122 22:00:39.169962    8836 out.go:177] * [force-systemd-env-255000] minikube v1.32.0 on Darwin 14.1.1
	I1122 22:00:39.234724    8836 out.go:177]   - MINIKUBE_LOCATION=17659
	I1122 22:00:39.212409    8836 notify.go:220] Checking for updates...
	I1122 22:00:39.276316    8836 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	I1122 22:00:39.318559    8836 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1122 22:00:39.341374    8836 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 22:00:39.361531    8836 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	I1122 22:00:39.382420    8836 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1122 22:00:39.403937    8836 config.go:182] Loaded profile config "offline-docker-201000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1122 22:00:39.404027    8836 driver.go:378] Setting default libvirt URI to qemu:///system
	I1122 22:00:39.458780    8836 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.2 (129061)
	I1122 22:00:39.458918    8836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 22:00:39.557255    8836 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:false NGoroutines:158 SystemTime:2023-11-23 06:00:39.546933413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218719232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-160d99154625 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile
=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescripti
on:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:
Docker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1122 22:00:39.599343    8836 out.go:177] * Using the docker driver based on user configuration
	I1122 22:00:39.620632    8836 start.go:298] selected driver: docker
	I1122 22:00:39.620641    8836 start.go:902] validating driver "docker" against <nil>
	I1122 22:00:39.620651    8836 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 22:00:39.623480    8836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 22:00:39.721685    8836 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:10 ContainersRunning:1 ContainersPaused:0 ContainersStopped:9 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:81 OomKillDisable:false NGoroutines:158 SystemTime:2023-11-23 06:00:39.712291572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218719232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-160d99154625 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile
=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescripti
on:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:
Docker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1122 22:00:39.721857    8836 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1122 22:00:39.722035    8836 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1122 22:00:39.743651    8836 out.go:177] * Using Docker Desktop driver with root privileges
	I1122 22:00:39.765443    8836 cni.go:84] Creating CNI manager for ""
	I1122 22:00:39.765485    8836 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1122 22:00:39.765504    8836 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1122 22:00:39.765518    8836 start_flags.go:323] config:
	{Name:force-systemd-env-255000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-255000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1122 22:00:39.787511    8836 out.go:177] * Starting control plane node force-systemd-env-255000 in cluster force-systemd-env-255000
	I1122 22:00:39.809466    8836 cache.go:121] Beginning downloading kic base image for docker with docker
	I1122 22:00:39.831347    8836 out.go:177] * Pulling base image ...
	I1122 22:00:39.874524    8836 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 22:00:39.874594    8836 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1122 22:00:39.874617    8836 cache.go:56] Caching tarball of preloaded images
	I1122 22:00:39.874622    8836 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1122 22:00:39.874846    8836 preload.go:174] Found /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1122 22:00:39.874865    8836 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1122 22:00:39.875765    8836 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/force-systemd-env-255000/config.json ...
	I1122 22:00:39.875972    8836 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/force-systemd-env-255000/config.json: {Name:mk74f2bbc3fb1fe40a4665b646518b4c96dc94d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 22:00:39.927971    8836 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1122 22:00:39.927990    8836 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1122 22:00:39.928015    8836 cache.go:194] Successfully downloaded all kic artifacts
	I1122 22:00:39.928066    8836 start.go:365] acquiring machines lock for force-systemd-env-255000: {Name:mk873014cf516dd91b3c64a4cc9ba2890e8394cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 22:00:39.928211    8836 start.go:369] acquired machines lock for "force-systemd-env-255000" in 131.624µs
	I1122 22:00:39.928238    8836 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-255000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-255000 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClien
tPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1122 22:00:39.928300    8836 start.go:125] createHost starting for "" (driver="docker")
	I1122 22:00:39.952052    8836 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1122 22:00:39.952439    8836 start.go:159] libmachine.API.Create for "force-systemd-env-255000" (driver="docker")
	I1122 22:00:39.952494    8836 client.go:168] LocalClient.Create starting
	I1122 22:00:39.952658    8836 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca.pem
	I1122 22:00:39.952745    8836 main.go:141] libmachine: Decoding PEM data...
	I1122 22:00:39.952782    8836 main.go:141] libmachine: Parsing certificate...
	I1122 22:00:39.952894    8836 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/cert.pem
	I1122 22:00:39.952961    8836 main.go:141] libmachine: Decoding PEM data...
	I1122 22:00:39.952978    8836 main.go:141] libmachine: Parsing certificate...
	I1122 22:00:39.954026    8836 cli_runner.go:164] Run: docker network inspect force-systemd-env-255000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 22:00:40.004541    8836 cli_runner.go:211] docker network inspect force-systemd-env-255000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 22:00:40.004637    8836 network_create.go:281] running [docker network inspect force-systemd-env-255000] to gather additional debugging logs...
	I1122 22:00:40.004652    8836 cli_runner.go:164] Run: docker network inspect force-systemd-env-255000
	W1122 22:00:40.054330    8836 cli_runner.go:211] docker network inspect force-systemd-env-255000 returned with exit code 1
	I1122 22:00:40.054361    8836 network_create.go:284] error running [docker network inspect force-systemd-env-255000]: docker network inspect force-systemd-env-255000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-255000 not found
	I1122 22:00:40.054373    8836 network_create.go:286] output of [docker network inspect force-systemd-env-255000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-255000 not found
	
	** /stderr **
	I1122 22:00:40.054532    8836 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 22:00:40.105641    8836 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:00:40.107039    8836 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:00:40.107460    8836 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002241690}
	I1122 22:00:40.107477    8836 network_create.go:124] attempt to create docker network force-systemd-env-255000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1122 22:00:40.107541    8836 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-255000 force-systemd-env-255000
	W1122 22:00:40.157388    8836 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-255000 force-systemd-env-255000 returned with exit code 1
	W1122 22:00:40.157425    8836 network_create.go:149] failed to create docker network force-systemd-env-255000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-255000 force-systemd-env-255000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1122 22:00:40.157448    8836 network_create.go:116] failed to create docker network force-systemd-env-255000 192.168.67.0/24, will retry: subnet is taken
	I1122 22:00:40.158807    8836 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:00:40.159201    8836 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00248cdd0}
	I1122 22:00:40.159213    8836 network_create.go:124] attempt to create docker network force-systemd-env-255000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1122 22:00:40.159287    8836 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-255000 force-systemd-env-255000
	I1122 22:00:40.243943    8836 network_create.go:108] docker network force-systemd-env-255000 192.168.76.0/24 created
	I1122 22:00:40.243989    8836 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-255000" container
	I1122 22:00:40.244103    8836 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 22:00:40.295688    8836 cli_runner.go:164] Run: docker volume create force-systemd-env-255000 --label name.minikube.sigs.k8s.io=force-systemd-env-255000 --label created_by.minikube.sigs.k8s.io=true
	I1122 22:00:40.346451    8836 oci.go:103] Successfully created a docker volume force-systemd-env-255000
	I1122 22:00:40.346564    8836 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-255000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-255000 --entrypoint /usr/bin/test -v force-systemd-env-255000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1122 22:00:40.704535    8836 oci.go:107] Successfully prepared a docker volume force-systemd-env-255000
	I1122 22:00:40.704570    8836 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 22:00:40.704582    8836 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 22:00:40.704680    8836 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-255000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 22:06:39.957000    8836 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 22:06:39.957139    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:06:40.009832    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	I1122 22:06:40.009963    8836 retry.go:31] will retry after 226.788729ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:40.238946    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:06:40.290269    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	I1122 22:06:40.290363    8836 retry.go:31] will retry after 515.727201ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:40.808474    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:06:40.861216    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	I1122 22:06:40.861338    8836 retry.go:31] will retry after 328.701657ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:41.192484    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:06:41.244882    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	W1122 22:06:41.244989    8836 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	
	W1122 22:06:41.245010    8836 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:41.245075    8836 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 22:06:41.245135    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:06:41.294638    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	I1122 22:06:41.294733    8836 retry.go:31] will retry after 164.136234ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:41.459335    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:06:41.509972    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	I1122 22:06:41.510065    8836 retry.go:31] will retry after 228.957874ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:41.739881    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:06:41.794080    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	I1122 22:06:41.794177    8836 retry.go:31] will retry after 482.77735ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:42.278683    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:06:42.332951    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	I1122 22:06:42.333037    8836 retry.go:31] will retry after 606.426355ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:42.941883    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:06:43.009725    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	W1122 22:06:43.009819    8836 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	
	W1122 22:06:43.009834    8836 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:43.009854    8836 start.go:128] duration metric: createHost completed in 6m3.079440308s
	I1122 22:06:43.009863    8836 start.go:83] releasing machines lock for "force-systemd-env-255000", held for 6m3.079543711s
	W1122 22:06:43.009877    8836 start.go:691] error starting host: creating host: create host timed out in 360.000000 seconds
	I1122 22:06:43.010290    8836 cli_runner.go:164] Run: docker container inspect force-systemd-env-255000 --format={{.State.Status}}
	W1122 22:06:43.060856    8836 cli_runner.go:211] docker container inspect force-systemd-env-255000 --format={{.State.Status}} returned with exit code 1
	I1122 22:06:43.060915    8836 delete.go:82] Unable to get host status for force-systemd-env-255000, assuming it has already been deleted: state: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	W1122 22:06:43.061004    8836 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1122 22:06:43.061013    8836 start.go:706] Will try again in 5 seconds ...
	I1122 22:06:48.063509    8836 start.go:365] acquiring machines lock for force-systemd-env-255000: {Name:mk873014cf516dd91b3c64a4cc9ba2890e8394cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 22:06:48.063742    8836 start.go:369] acquired machines lock for "force-systemd-env-255000" in 137.882µs
	I1122 22:06:48.063782    8836 start.go:96] Skipping create...Using existing machine configuration
	I1122 22:06:48.063797    8836 fix.go:54] fixHost starting: 
	I1122 22:06:48.064289    8836 cli_runner.go:164] Run: docker container inspect force-systemd-env-255000 --format={{.State.Status}}
	W1122 22:06:48.117292    8836 cli_runner.go:211] docker container inspect force-systemd-env-255000 --format={{.State.Status}} returned with exit code 1
	I1122 22:06:48.117341    8836 fix.go:102] recreateIfNeeded on force-systemd-env-255000: state= err=unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:48.117363    8836 fix.go:107] machineExists: false. err=machine does not exist
	I1122 22:06:48.139123    8836 out.go:177] * docker "force-systemd-env-255000" container is missing, will recreate.
	I1122 22:06:48.183039    8836 delete.go:124] DEMOLISHING force-systemd-env-255000 ...
	I1122 22:06:48.183219    8836 cli_runner.go:164] Run: docker container inspect force-systemd-env-255000 --format={{.State.Status}}
	W1122 22:06:48.235573    8836 cli_runner.go:211] docker container inspect force-systemd-env-255000 --format={{.State.Status}} returned with exit code 1
	W1122 22:06:48.235628    8836 stop.go:75] unable to get state: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:48.235648    8836 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:48.236019    8836 cli_runner.go:164] Run: docker container inspect force-systemd-env-255000 --format={{.State.Status}}
	W1122 22:06:48.284899    8836 cli_runner.go:211] docker container inspect force-systemd-env-255000 --format={{.State.Status}} returned with exit code 1
	I1122 22:06:48.284956    8836 delete.go:82] Unable to get host status for force-systemd-env-255000, assuming it has already been deleted: state: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:48.285049    8836 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-255000
	W1122 22:06:48.334204    8836 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-255000 returned with exit code 1
	I1122 22:06:48.334241    8836 kic.go:371] could not find the container force-systemd-env-255000 to remove it. will try anyways
	I1122 22:06:48.334313    8836 cli_runner.go:164] Run: docker container inspect force-systemd-env-255000 --format={{.State.Status}}
	W1122 22:06:48.384014    8836 cli_runner.go:211] docker container inspect force-systemd-env-255000 --format={{.State.Status}} returned with exit code 1
	W1122 22:06:48.384076    8836 oci.go:84] error getting container status, will try to delete anyways: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:48.384163    8836 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-env-255000 /bin/bash -c "sudo init 0"
	W1122 22:06:48.433909    8836 cli_runner.go:211] docker exec --privileged -t force-systemd-env-255000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1122 22:06:48.433946    8836 oci.go:650] error shutdown force-systemd-env-255000: docker exec --privileged -t force-systemd-env-255000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:49.434193    8836 cli_runner.go:164] Run: docker container inspect force-systemd-env-255000 --format={{.State.Status}}
	W1122 22:06:49.485598    8836 cli_runner.go:211] docker container inspect force-systemd-env-255000 --format={{.State.Status}} returned with exit code 1
	I1122 22:06:49.485645    8836 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:49.485654    8836 oci.go:664] temporary error: container force-systemd-env-255000 status is  but expect it to be exited
	I1122 22:06:49.485677    8836 retry.go:31] will retry after 286.191165ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:49.772304    8836 cli_runner.go:164] Run: docker container inspect force-systemd-env-255000 --format={{.State.Status}}
	W1122 22:06:49.826548    8836 cli_runner.go:211] docker container inspect force-systemd-env-255000 --format={{.State.Status}} returned with exit code 1
	I1122 22:06:49.826606    8836 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:49.826620    8836 oci.go:664] temporary error: container force-systemd-env-255000 status is  but expect it to be exited
	I1122 22:06:49.826642    8836 retry.go:31] will retry after 625.288526ms: couldn't verify container is exited. %v: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:50.453870    8836 cli_runner.go:164] Run: docker container inspect force-systemd-env-255000 --format={{.State.Status}}
	W1122 22:06:50.508809    8836 cli_runner.go:211] docker container inspect force-systemd-env-255000 --format={{.State.Status}} returned with exit code 1
	I1122 22:06:50.508865    8836 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:50.508874    8836 oci.go:664] temporary error: container force-systemd-env-255000 status is  but expect it to be exited
	I1122 22:06:50.508899    8836 retry.go:31] will retry after 1.653518836s: couldn't verify container is exited. %v: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:52.162950    8836 cli_runner.go:164] Run: docker container inspect force-systemd-env-255000 --format={{.State.Status}}
	W1122 22:06:52.217688    8836 cli_runner.go:211] docker container inspect force-systemd-env-255000 --format={{.State.Status}} returned with exit code 1
	I1122 22:06:52.217741    8836 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:52.217755    8836 oci.go:664] temporary error: container force-systemd-env-255000 status is  but expect it to be exited
	I1122 22:06:52.217785    8836 retry.go:31] will retry after 1.805800266s: couldn't verify container is exited. %v: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:54.025960    8836 cli_runner.go:164] Run: docker container inspect force-systemd-env-255000 --format={{.State.Status}}
	W1122 22:06:54.079987    8836 cli_runner.go:211] docker container inspect force-systemd-env-255000 --format={{.State.Status}} returned with exit code 1
	I1122 22:06:54.080041    8836 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:54.080051    8836 oci.go:664] temporary error: container force-systemd-env-255000 status is  but expect it to be exited
	I1122 22:06:54.080079    8836 retry.go:31] will retry after 1.469942956s: couldn't verify container is exited. %v: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:55.551193    8836 cli_runner.go:164] Run: docker container inspect force-systemd-env-255000 --format={{.State.Status}}
	W1122 22:06:55.605310    8836 cli_runner.go:211] docker container inspect force-systemd-env-255000 --format={{.State.Status}} returned with exit code 1
	I1122 22:06:55.605358    8836 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:06:55.605368    8836 oci.go:664] temporary error: container force-systemd-env-255000 status is  but expect it to be exited
	I1122 22:06:55.605392    8836 retry.go:31] will retry after 5.217756901s: couldn't verify container is exited. %v: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:07:00.825523    8836 cli_runner.go:164] Run: docker container inspect force-systemd-env-255000 --format={{.State.Status}}
	W1122 22:07:00.879091    8836 cli_runner.go:211] docker container inspect force-systemd-env-255000 --format={{.State.Status}} returned with exit code 1
	I1122 22:07:00.879138    8836 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:07:00.879152    8836 oci.go:664] temporary error: container force-systemd-env-255000 status is  but expect it to be exited
	I1122 22:07:00.879176    8836 retry.go:31] will retry after 6.739355231s: couldn't verify container is exited. %v: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:07:07.619052    8836 cli_runner.go:164] Run: docker container inspect force-systemd-env-255000 --format={{.State.Status}}
	W1122 22:07:07.671742    8836 cli_runner.go:211] docker container inspect force-systemd-env-255000 --format={{.State.Status}} returned with exit code 1
	I1122 22:07:07.671793    8836 oci.go:662] temporary error verifying shutdown: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:07:07.671810    8836 oci.go:664] temporary error: container force-systemd-env-255000 status is  but expect it to be exited
	I1122 22:07:07.671839    8836 oci.go:88] couldn't shut down force-systemd-env-255000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	 
	I1122 22:07:07.671922    8836 cli_runner.go:164] Run: docker rm -f -v force-systemd-env-255000
	I1122 22:07:07.722764    8836 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-env-255000
	W1122 22:07:07.772932    8836 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-env-255000 returned with exit code 1
	I1122 22:07:07.773047    8836 cli_runner.go:164] Run: docker network inspect force-systemd-env-255000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 22:07:07.823446    8836 cli_runner.go:164] Run: docker network rm force-systemd-env-255000
	I1122 22:07:07.922836    8836 fix.go:114] Sleeping 1 second for extra luck!
	I1122 22:07:08.924545    8836 start.go:125] createHost starting for "" (driver="docker")
	I1122 22:07:08.946604    8836 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1122 22:07:08.946787    8836 start.go:159] libmachine.API.Create for "force-systemd-env-255000" (driver="docker")
	I1122 22:07:08.946827    8836 client.go:168] LocalClient.Create starting
	I1122 22:07:08.947057    8836 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca.pem
	I1122 22:07:08.947143    8836 main.go:141] libmachine: Decoding PEM data...
	I1122 22:07:08.947185    8836 main.go:141] libmachine: Parsing certificate...
	I1122 22:07:08.947274    8836 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/cert.pem
	I1122 22:07:08.947343    8836 main.go:141] libmachine: Decoding PEM data...
	I1122 22:07:08.947359    8836 main.go:141] libmachine: Parsing certificate...
	I1122 22:07:08.948271    8836 cli_runner.go:164] Run: docker network inspect force-systemd-env-255000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 22:07:09.000892    8836 cli_runner.go:211] docker network inspect force-systemd-env-255000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 22:07:09.000993    8836 network_create.go:281] running [docker network inspect force-systemd-env-255000] to gather additional debugging logs...
	I1122 22:07:09.001010    8836 cli_runner.go:164] Run: docker network inspect force-systemd-env-255000
	W1122 22:07:09.052132    8836 cli_runner.go:211] docker network inspect force-systemd-env-255000 returned with exit code 1
	I1122 22:07:09.052163    8836 network_create.go:284] error running [docker network inspect force-systemd-env-255000]: docker network inspect force-systemd-env-255000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-255000 not found
	I1122 22:07:09.052178    8836 network_create.go:286] output of [docker network inspect force-systemd-env-255000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-255000 not found
	
	** /stderr **
	I1122 22:07:09.052316    8836 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 22:07:09.104243    8836 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:07:09.105645    8836 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:07:09.107181    8836 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:07:09.108745    8836 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:07:09.110317    8836 network.go:212] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 22:07:09.110713    8836 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00214d9b0}
	I1122 22:07:09.110727    8836 network_create.go:124] attempt to create docker network force-systemd-env-255000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 65535 ...
	I1122 22:07:09.110798    8836 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-255000 force-systemd-env-255000
	I1122 22:07:09.195591    8836 network_create.go:108] docker network force-systemd-env-255000 192.168.94.0/24 created
	I1122 22:07:09.195631    8836 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-env-255000" container
	I1122 22:07:09.195772    8836 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 22:07:09.248163    8836 cli_runner.go:164] Run: docker volume create force-systemd-env-255000 --label name.minikube.sigs.k8s.io=force-systemd-env-255000 --label created_by.minikube.sigs.k8s.io=true
	I1122 22:07:09.297922    8836 oci.go:103] Successfully created a docker volume force-systemd-env-255000
	I1122 22:07:09.298032    8836 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-255000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-255000 --entrypoint /usr/bin/test -v force-systemd-env-255000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1122 22:07:09.592699    8836 oci.go:107] Successfully prepared a docker volume force-systemd-env-255000
	I1122 22:07:09.592748    8836 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 22:07:09.592763    8836 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 22:07:09.592862    8836 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-255000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 22:13:08.944070    8836 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 22:13:08.944202    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:13:08.997097    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	I1122 22:13:08.997215    8836 retry.go:31] will retry after 264.805712ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:13:09.263144    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:13:09.316941    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	I1122 22:13:09.317065    8836 retry.go:31] will retry after 297.369358ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:13:09.615375    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:13:09.668426    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	I1122 22:13:09.668524    8836 retry.go:31] will retry after 637.400079ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:13:10.308416    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:13:10.360731    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	W1122 22:13:10.360854    8836 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	
	W1122 22:13:10.360877    8836 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:13:10.360937    8836 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 22:13:10.361001    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:13:10.410265    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	I1122 22:13:10.410365    8836 retry.go:31] will retry after 371.684935ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:13:10.783258    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:13:10.836421    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	I1122 22:13:10.836513    8836 retry.go:31] will retry after 349.222517ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:13:11.188164    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:13:11.243298    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	I1122 22:13:11.243396    8836 retry.go:31] will retry after 815.9455ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:13:12.059792    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:13:12.114184    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	W1122 22:13:12.114297    8836 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	
	W1122 22:13:12.114316    8836 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:13:12.114334    8836 start.go:128] duration metric: createHost completed in 6m3.19501754s
	I1122 22:13:12.114400    8836 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 22:13:12.114461    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:13:12.164704    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	I1122 22:13:12.164803    8836 retry.go:31] will retry after 261.123593ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:13:12.428285    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:13:12.482460    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	I1122 22:13:12.482548    8836 retry.go:31] will retry after 289.953904ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:13:12.772737    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:13:12.822701    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	I1122 22:13:12.822792    8836 retry.go:31] will retry after 768.725561ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:13:13.592782    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:13:13.645302    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	W1122 22:13:13.645403    8836 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	
	W1122 22:13:13.645415    8836 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:13:13.645475    8836 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 22:13:13.645532    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:13:13.695660    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	I1122 22:13:13.695763    8836 retry.go:31] will retry after 254.056334ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:13:13.950373    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:13:14.005802    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	I1122 22:13:14.005897    8836 retry.go:31] will retry after 390.616121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:13:14.396924    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:13:14.449330    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	I1122 22:13:14.449417    8836 retry.go:31] will retry after 293.972324ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:13:14.745782    8836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000
	W1122 22:13:14.800399    8836 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000 returned with exit code 1
	W1122 22:13:14.800505    8836 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	
	W1122 22:13:14.800532    8836 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "force-systemd-env-255000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-255000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	I1122 22:13:14.800543    8836 fix.go:56] fixHost completed within 6m26.741925179s
	I1122 22:13:14.800552    8836 start.go:83] releasing machines lock for "force-systemd-env-255000", held for 6m26.741973053s
	W1122 22:13:14.800623    8836 out.go:239] * Failed to start docker container. Running "minikube delete -p force-systemd-env-255000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p force-systemd-env-255000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1122 22:13:14.843188    8836 out.go:177] 
	W1122 22:13:14.865228    8836 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1122 22:13:14.865278    8836 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1122 22:13:14.865319    8836 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1122 22:13:14.908264    8836 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-255000 --memory=2048 --alsologtostderr -v=5 --driver=docker " : exit status 52
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-255000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-255000 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (196.322243ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_ssh_bee0f26250c13d3e98e295459d643952c0091a53_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-255000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-11-22 22:13:15.181251 -0800 PST m=+5905.261705033
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-255000
helpers_test.go:235: (dbg) docker inspect force-systemd-env-255000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "force-systemd-env-255000",
	        "Id": "6c699fc41689d6ed9e51a7705ffd09130adc3f6e3d0857071dc4a318e17de646",
	        "Created": "2023-11-23T06:07:09.157387633Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-env-255000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-255000 -n force-systemd-env-255000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-255000 -n force-systemd-env-255000: exit status 7 (105.206442ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 22:13:15.339075    9400 status.go:249] status error: host: state: unknown state "force-systemd-env-255000": docker container inspect force-systemd-env-255000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: force-systemd-env-255000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-255000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "force-systemd-env-255000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-255000
--- FAIL: TestForceSystemdEnv (756.98s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (258.85s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-009000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E1122 20:45:14.817786    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 20:47:30.968383    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 20:47:57.936851    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 20:47:57.942392    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 20:47:57.952711    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 20:47:57.974941    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 20:47:58.017137    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 20:47:58.098571    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 20:47:58.258746    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 20:47:58.579643    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 20:47:58.654939    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 20:47:59.221900    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 20:48:00.502889    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 20:48:03.063253    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 20:48:08.184135    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 20:48:18.424147    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 20:48:38.903857    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 20:49:19.863192    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-009000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m18.806437414s)

                                                
                                                
-- stdout --
	* [ingress-addon-legacy-009000] minikube v1.32.0 on Darwin 14.1.1
	  - MINIKUBE_LOCATION=17659
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-009000 in cluster ingress-addon-legacy-009000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 20:45:11.137289    4254 out.go:296] Setting OutFile to fd 1 ...
	I1122 20:45:11.137579    4254 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 20:45:11.137586    4254 out.go:309] Setting ErrFile to fd 2...
	I1122 20:45:11.137590    4254 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 20:45:11.137764    4254 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
	I1122 20:45:11.139203    4254 out.go:303] Setting JSON to false
	I1122 20:45:11.161341    4254 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":885,"bootTime":1700713826,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1122 20:45:11.161462    4254 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1122 20:45:11.183582    4254 out.go:177] * [ingress-addon-legacy-009000] minikube v1.32.0 on Darwin 14.1.1
	I1122 20:45:11.225329    4254 out.go:177]   - MINIKUBE_LOCATION=17659
	I1122 20:45:11.225368    4254 notify.go:220] Checking for updates...
	I1122 20:45:11.267055    4254 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	I1122 20:45:11.288260    4254 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1122 20:45:11.330094    4254 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 20:45:11.351629    4254 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	I1122 20:45:11.373525    4254 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 20:45:11.396569    4254 driver.go:378] Setting default libvirt URI to qemu:///system
	I1122 20:45:11.452399    4254 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.2 (129061)
	I1122 20:45:11.452533    4254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 20:45:11.557710    4254 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:54 SystemTime:2023-11-23 04:45:11.548062265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218719232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-160d99154625 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=u
nconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription
:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Do
cker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1122 20:45:11.600887    4254 out.go:177] * Using the docker driver based on user configuration
	I1122 20:45:11.622793    4254 start.go:298] selected driver: docker
	I1122 20:45:11.622851    4254 start.go:902] validating driver "docker" against <nil>
	I1122 20:45:11.622870    4254 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 20:45:11.627298    4254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 20:45:11.731230    4254 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:54 SystemTime:2023-11-23 04:45:11.721457238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218719232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-160d99154625 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=u
nconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription
:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Do
cker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1122 20:45:11.731405    4254 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1122 20:45:11.731603    4254 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 20:45:11.753235    4254 out.go:177] * Using Docker Desktop driver with root privileges
	I1122 20:45:11.775405    4254 cni.go:84] Creating CNI manager for ""
	I1122 20:45:11.775446    4254 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1122 20:45:11.775461    4254 start_flags.go:323] config:
	{Name:ingress-addon-legacy-009000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-009000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1122 20:45:11.818241    4254 out.go:177] * Starting control plane node ingress-addon-legacy-009000 in cluster ingress-addon-legacy-009000
	I1122 20:45:11.840233    4254 cache.go:121] Beginning downloading kic base image for docker with docker
	I1122 20:45:11.862062    4254 out.go:177] * Pulling base image ...
	I1122 20:45:11.904235    4254 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1122 20:45:11.904337    4254 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1122 20:45:11.957336    4254 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1122 20:45:11.957376    4254 cache.go:56] Caching tarball of preloaded images
	I1122 20:45:11.957598    4254 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1122 20:45:11.979144    4254 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1122 20:45:11.958366    4254 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1122 20:45:12.021294    4254 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1122 20:45:12.021315    4254 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1122 20:45:12.114492    4254 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1122 20:45:16.610043    4254 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1122 20:45:16.610286    4254 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1122 20:45:17.233680    4254 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I1122 20:45:17.233955    4254 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/config.json ...
	I1122 20:45:17.233978    4254 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/config.json: {Name:mk9fac610fb8e5dfdc3b8429a8a7324a5a7d09cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 20:45:17.234291    4254 cache.go:194] Successfully downloaded all kic artifacts
	I1122 20:45:17.234320    4254 start.go:365] acquiring machines lock for ingress-addon-legacy-009000: {Name:mk952e9cdc1a112f16768d2d7903bf59eb8c10c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 20:45:17.234451    4254 start.go:369] acquired machines lock for "ingress-addon-legacy-009000" in 123.662µs
	I1122 20:45:17.234469    4254 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-009000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-009000 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1122 20:45:17.234515    4254 start.go:125] createHost starting for "" (driver="docker")
	I1122 20:45:17.258636    4254 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1122 20:45:17.258933    4254 start.go:159] libmachine.API.Create for "ingress-addon-legacy-009000" (driver="docker")
	I1122 20:45:17.258981    4254 client.go:168] LocalClient.Create starting
	I1122 20:45:17.259156    4254 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca.pem
	I1122 20:45:17.259251    4254 main.go:141] libmachine: Decoding PEM data...
	I1122 20:45:17.259288    4254 main.go:141] libmachine: Parsing certificate...
	I1122 20:45:17.259389    4254 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/cert.pem
	I1122 20:45:17.259458    4254 main.go:141] libmachine: Decoding PEM data...
	I1122 20:45:17.259476    4254 main.go:141] libmachine: Parsing certificate...
	I1122 20:45:17.279264    4254 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-009000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 20:45:17.332351    4254 cli_runner.go:211] docker network inspect ingress-addon-legacy-009000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 20:45:17.332479    4254 network_create.go:281] running [docker network inspect ingress-addon-legacy-009000] to gather additional debugging logs...
	I1122 20:45:17.332499    4254 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-009000
	W1122 20:45:17.382677    4254 cli_runner.go:211] docker network inspect ingress-addon-legacy-009000 returned with exit code 1
	I1122 20:45:17.382716    4254 network_create.go:284] error running [docker network inspect ingress-addon-legacy-009000]: docker network inspect ingress-addon-legacy-009000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-009000 not found
	I1122 20:45:17.382731    4254 network_create.go:286] output of [docker network inspect ingress-addon-legacy-009000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-009000 not found
	
	** /stderr **
	I1122 20:45:17.382897    4254 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 20:45:17.434168    4254 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00061ee30}
	I1122 20:45:17.434204    4254 network_create.go:124] attempt to create docker network ingress-addon-legacy-009000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
	I1122 20:45:17.434281    4254 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-009000 ingress-addon-legacy-009000
	I1122 20:45:17.520454    4254 network_create.go:108] docker network ingress-addon-legacy-009000 192.168.49.0/24 created
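
The network bring-up above is an inspect-then-create pattern: `docker network inspect` exits non-zero for the missing network, so minikube creates it with an explicit subnet, gateway, MTU and labels. A minimal Go sketch of that flow, shelling out to the same docker CLI commands shown in the log (the function name ensureNetwork is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// ensureNetwork mimics the inspect-then-create flow from the log: if
// `docker network inspect` fails (network missing), create it with the
// subnet/gateway/MTU options seen above. Illustrative only.
func ensureNetwork(name, subnet, gateway string, mtu int) error {
	if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
		return nil // network already exists
	}
	args := []string{
		"network", "create", "--driver=bridge",
		"--subnet=" + subnet, "--gateway=" + gateway,
		"-o", "--ip-masq", "-o", "--icc",
		"-o", fmt.Sprintf("com.docker.network.driver.mtu=%d", mtu),
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=" + name,
		name,
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("network create failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensureNetwork("ingress-addon-legacy-009000", "192.168.49.0/24", "192.168.49.1", 65535); err != nil {
		fmt.Println(err)
	}
}
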
	I1122 20:45:17.520575    4254 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-009000" container
	I1122 20:45:17.520689    4254 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 20:45:17.571480    4254 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-009000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-009000 --label created_by.minikube.sigs.k8s.io=true
	I1122 20:45:17.623540    4254 oci.go:103] Successfully created a docker volume ingress-addon-legacy-009000
	I1122 20:45:17.623662    4254 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-009000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-009000 --entrypoint /usr/bin/test -v ingress-addon-legacy-009000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1122 20:45:18.062722    4254 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-009000
	I1122 20:45:18.062770    4254 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1122 20:45:18.062782    4254 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 20:45:18.062917    4254 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-009000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 20:45:20.451397    4254 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-009000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir: (2.388450087s)
	I1122 20:45:20.451424    4254 kic.go:203] duration metric: took 2.388693 seconds to extract preloaded images to volume
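
Extracting the preload into the node's volume is done by running tar inside a disposable kicbase container with the host tarball and the named volume both mounted, exactly as the `docker run --rm --entrypoint /usr/bin/tar ...` line above shows. A rough, self-contained Go equivalent (extractPreload is an illustrative name):

package main

import (
	"fmt"
	"os/exec"
)

// Image reference taken verbatim from the log.
const kicbase = "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50"

// extractPreload unpacks a preloaded-images tarball into a docker volume by
// running tar inside a throwaway kicbase container, as in the log above.
func extractPreload(volume, tarball string) error {
	out, err := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		kicbase,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
	).CombinedOutput()
	if err != nil {
		return fmt.Errorf("preload extraction failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := extractPreload("ingress-addon-legacy-009000",
		"/Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4")
	fmt.Println(err)
}
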
	I1122 20:45:20.451552    4254 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 20:45:20.553790    4254 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-009000 --name ingress-addon-legacy-009000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-009000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-009000 --network ingress-addon-legacy-009000 --ip 192.168.49.2 --volume ingress-addon-legacy-009000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1122 20:45:20.827255    4254 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-009000 --format={{.State.Running}}
	I1122 20:45:20.885277    4254 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-009000 --format={{.State.Status}}
	I1122 20:45:20.945294    4254 cli_runner.go:164] Run: docker exec ingress-addon-legacy-009000 stat /var/lib/dpkg/alternatives/iptables
	I1122 20:45:21.106926    4254 oci.go:144] the created container "ingress-addon-legacy-009000" has a running status.
	I1122 20:45:21.106958    4254 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/17659-904/.minikube/machines/ingress-addon-legacy-009000/id_rsa...
	I1122 20:45:21.208069    4254 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17659-904/.minikube/machines/ingress-addon-legacy-009000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1122 20:45:21.208135    4254 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17659-904/.minikube/machines/ingress-addon-legacy-009000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 20:45:21.275732    4254 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-009000 --format={{.State.Status}}
	I1122 20:45:21.338745    4254 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 20:45:21.338774    4254 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-009000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 20:45:21.435477    4254 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-009000 --format={{.State.Status}}
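
The kic SSH setup above generates an RSA keypair on the host, installs the public half as /home/docker/.ssh/authorized_keys in the container, and chowns it to the docker user. The sketch below shells out to ssh-keygen and the docker CLI to get the same effect; minikube does this in-process, so treat the helper and its individual steps as illustrative only:

package main

import (
	"fmt"
	"os/exec"
)

// provisionSSHKey creates an RSA keypair and installs the public key as the
// docker user's authorized_keys inside the node container, mirroring the
// kic_runner steps in the log. Using ssh-keygen and docker cp here is a
// simplification of what minikube does internally.
func provisionSSHKey(container, keyPath string) error {
	steps := [][]string{
		{"ssh-keygen", "-t", "rsa", "-N", "", "-f", keyPath},
		{"docker", "exec", container, "mkdir", "-p", "/home/docker/.ssh"},
		{"docker", "cp", keyPath + ".pub", container + ":/home/docker/.ssh/authorized_keys"},
		{"docker", "exec", "--privileged", container, "chown", "docker:docker", "/home/docker/.ssh/authorized_keys"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v: %s", s, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(provisionSSHKey("ingress-addon-legacy-009000", "/tmp/id_rsa"))
}
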
	I1122 20:45:21.488568    4254 machine.go:88] provisioning docker machine ...
	I1122 20:45:21.488627    4254 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-009000"
	I1122 20:45:21.488729    4254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-009000
	I1122 20:45:21.540926    4254 main.go:141] libmachine: Using SSH client type: native
	I1122 20:45:21.541256    4254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1405c40] 0x1408920 <nil>  [] 0s} 127.0.0.1 50457 <nil> <nil>}
	I1122 20:45:21.541271    4254 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-009000 && echo "ingress-addon-legacy-009000" | sudo tee /etc/hostname
	I1122 20:45:21.674685    4254 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-009000
	
	I1122 20:45:21.674785    4254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-009000
	I1122 20:45:21.727673    4254 main.go:141] libmachine: Using SSH client type: native
	I1122 20:45:21.728095    4254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1405c40] 0x1408920 <nil>  [] 0s} 127.0.0.1 50457 <nil> <nil>}
	I1122 20:45:21.728113    4254 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-009000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-009000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-009000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 20:45:21.854910    4254 main.go:141] libmachine: SSH cmd err, output: <nil>: 
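
The "native" SSH client in the log dials the container's published 22/tcp port on 127.0.0.1 (50457 here) with the generated key and runs the hostname and /etc/hosts commands. A minimal equivalent using golang.org/x/crypto/ssh, with the port, user and key path taken from the log (runOverSSH is an illustrative helper, not minikube's API):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials the node's forwarded SSH port and runs a single command,
// roughly what the libmachine "native" SSH client does in the log above.
func runOverSSH(addr, keyFile, cmd string) (string, error) {
	key, err := os.ReadFile(keyFile)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local container, no host key pinning
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("127.0.0.1:50457",
		"/Users/jenkins/minikube-integration/17659-904/.minikube/machines/ingress-addon-legacy-009000/id_rsa",
		`sudo hostname ingress-addon-legacy-009000 && echo "ingress-addon-legacy-009000" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}
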
	I1122 20:45:21.854934    4254 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17659-904/.minikube CaCertPath:/Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17659-904/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17659-904/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17659-904/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17659-904/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17659-904/.minikube}
	I1122 20:45:21.854953    4254 ubuntu.go:177] setting up certificates
	I1122 20:45:21.854959    4254 provision.go:83] configureAuth start
	I1122 20:45:21.855045    4254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-009000
	I1122 20:45:21.906042    4254 provision.go:138] copyHostCerts
	I1122 20:45:21.906083    4254 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17659-904/.minikube/ca.pem
	I1122 20:45:21.906151    4254 exec_runner.go:144] found /Users/jenkins/minikube-integration/17659-904/.minikube/ca.pem, removing ...
	I1122 20:45:21.906158    4254 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17659-904/.minikube/ca.pem
	I1122 20:45:21.906279    4254 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17659-904/.minikube/ca.pem (1082 bytes)
	I1122 20:45:21.906461    4254 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17659-904/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17659-904/.minikube/cert.pem
	I1122 20:45:21.906493    4254 exec_runner.go:144] found /Users/jenkins/minikube-integration/17659-904/.minikube/cert.pem, removing ...
	I1122 20:45:21.906497    4254 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17659-904/.minikube/cert.pem
	I1122 20:45:21.906572    4254 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17659-904/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17659-904/.minikube/cert.pem (1123 bytes)
	I1122 20:45:21.906719    4254 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17659-904/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17659-904/.minikube/key.pem
	I1122 20:45:21.906755    4254 exec_runner.go:144] found /Users/jenkins/minikube-integration/17659-904/.minikube/key.pem, removing ...
	I1122 20:45:21.906760    4254 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17659-904/.minikube/key.pem
	I1122 20:45:21.906828    4254 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17659-904/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17659-904/.minikube/key.pem (1679 bytes)
	I1122 20:45:21.906968    4254 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17659-904/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-009000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-009000]
	I1122 20:45:22.088974    4254 provision.go:172] copyRemoteCerts
	I1122 20:45:22.089093    4254 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 20:45:22.089208    4254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-009000
	I1122 20:45:22.144131    4254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50457 SSHKeyPath:/Users/jenkins/minikube-integration/17659-904/.minikube/machines/ingress-addon-legacy-009000/id_rsa Username:docker}
	I1122 20:45:22.235393    4254 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1122 20:45:22.235473    4254 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1122 20:45:22.255649    4254 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17659-904/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1122 20:45:22.255721    4254 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17659-904/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1122 20:45:22.275864    4254 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17659-904/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1122 20:45:22.275930    4254 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17659-904/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 20:45:22.296007    4254 provision.go:86] duration metric: configureAuth took 441.044631ms
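
configureAuth regenerates a docker server certificate signed by the minikube CA, with the SANs listed in the log (node IP, 127.0.0.1, localhost, minikube, and the profile name), before copying ca.pem/server.pem/server-key.pem into /etc/docker. A condensed crypto/x509 sketch of the signing step; a throwaway CA is generated inline so the example runs standalone, whereas minikube reuses the existing ca.pem/ca-key.pem, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for ca.pem/ca-key.pem; errors ignored in this sketch.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-009000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-009000"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	fmt.Println("server certificate PEM written to stdout")
}
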
	I1122 20:45:22.296032    4254 ubuntu.go:193] setting minikube options for container-runtime
	I1122 20:45:22.296173    4254 config.go:182] Loaded profile config "ingress-addon-legacy-009000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1122 20:45:22.296235    4254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-009000
	I1122 20:45:22.347927    4254 main.go:141] libmachine: Using SSH client type: native
	I1122 20:45:22.348231    4254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1405c40] 0x1408920 <nil>  [] 0s} 127.0.0.1 50457 <nil> <nil>}
	I1122 20:45:22.348264    4254 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1122 20:45:22.473447    4254 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1122 20:45:22.473469    4254 ubuntu.go:71] root file system type: overlay
	I1122 20:45:22.473564    4254 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1122 20:45:22.473651    4254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-009000
	I1122 20:45:22.525020    4254 main.go:141] libmachine: Using SSH client type: native
	I1122 20:45:22.525329    4254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1405c40] 0x1408920 <nil>  [] 0s} 127.0.0.1 50457 <nil> <nil>}
	I1122 20:45:22.525405    4254 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1122 20:45:22.659603    4254 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1122 20:45:22.659700    4254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-009000
	I1122 20:45:22.712217    4254 main.go:141] libmachine: Using SSH client type: native
	I1122 20:45:22.712518    4254 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1405c40] 0x1408920 <nil>  [] 0s} 127.0.0.1 50457 <nil> <nil>}
	I1122 20:45:22.712539    4254 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1122 20:45:23.275684    4254 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:22.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-11-23 04:45:22.657540972 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1122 20:45:23.275707    4254 machine.go:91] provisioned docker machine in 1.787142206s
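
The docker.service update follows a write-new/diff/swap pattern: the rendered unit is written to docker.service.new and only moved over the live unit (followed by daemon-reload, enable and restart) when the diff above is non-empty. The helper below simply reassembles the exact one-liner from the log; running it over SSH is left out:

package main

import "fmt"

// updateUnitCmd reproduces the conditional swap from the log: replace the live
// docker.service only when the newly rendered unit differs, then reload and
// restart. The returned string is what gets run over SSH on the node.
func updateUnitCmd(unitPath string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
		unitPath)
}

func main() {
	fmt.Println(updateUnitCmd("/lib/systemd/system/docker.service"))
}
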
	I1122 20:45:23.275714    4254 client.go:171] LocalClient.Create took 6.016857717s
	I1122 20:45:23.275729    4254 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-009000" took 6.016930273s
	I1122 20:45:23.275737    4254 start.go:300] post-start starting for "ingress-addon-legacy-009000" (driver="docker")
	I1122 20:45:23.275746    4254 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 20:45:23.275836    4254 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 20:45:23.275912    4254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-009000
	I1122 20:45:23.328925    4254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50457 SSHKeyPath:/Users/jenkins/minikube-integration/17659-904/.minikube/machines/ingress-addon-legacy-009000/id_rsa Username:docker}
	I1122 20:45:23.419671    4254 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 20:45:23.423619    4254 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 20:45:23.423643    4254 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1122 20:45:23.423650    4254 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1122 20:45:23.423656    4254 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1122 20:45:23.423667    4254 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17659-904/.minikube/addons for local assets ...
	I1122 20:45:23.423772    4254 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17659-904/.minikube/files for local assets ...
	I1122 20:45:23.423957    4254 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17659-904/.minikube/files/etc/ssl/certs/14852.pem -> 14852.pem in /etc/ssl/certs
	I1122 20:45:23.423965    4254 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17659-904/.minikube/files/etc/ssl/certs/14852.pem -> /etc/ssl/certs/14852.pem
	I1122 20:45:23.424162    4254 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 20:45:23.432346    4254 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17659-904/.minikube/files/etc/ssl/certs/14852.pem --> /etc/ssl/certs/14852.pem (1708 bytes)
	I1122 20:45:23.452679    4254 start.go:303] post-start completed in 176.93739ms
	I1122 20:45:23.453234    4254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-009000
	I1122 20:45:23.504674    4254 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/config.json ...
	I1122 20:45:23.505138    4254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 20:45:23.505206    4254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-009000
	I1122 20:45:23.556191    4254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50457 SSHKeyPath:/Users/jenkins/minikube-integration/17659-904/.minikube/machines/ingress-addon-legacy-009000/id_rsa Username:docker}
	I1122 20:45:23.643900    4254 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 20:45:23.648730    4254 start.go:128] duration metric: createHost completed in 6.414341744s
	I1122 20:45:23.648750    4254 start.go:83] releasing machines lock for "ingress-addon-legacy-009000", held for 6.41442922s
	I1122 20:45:23.648837    4254 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-009000
	I1122 20:45:23.699657    4254 ssh_runner.go:195] Run: cat /version.json
	I1122 20:45:23.699685    4254 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 20:45:23.699738    4254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-009000
	I1122 20:45:23.699788    4254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-009000
	I1122 20:45:23.757785    4254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50457 SSHKeyPath:/Users/jenkins/minikube-integration/17659-904/.minikube/machines/ingress-addon-legacy-009000/id_rsa Username:docker}
	I1122 20:45:23.757786    4254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50457 SSHKeyPath:/Users/jenkins/minikube-integration/17659-904/.minikube/machines/ingress-addon-legacy-009000/id_rsa Username:docker}
	I1122 20:45:23.951380    4254 ssh_runner.go:195] Run: systemctl --version
	I1122 20:45:23.956460    4254 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1122 20:45:23.961335    4254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1122 20:45:23.983329    4254 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1122 20:45:23.983394    4254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1122 20:45:23.998624    4254 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1122 20:45:24.013601    4254 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1122 20:45:24.013623    4254 start.go:472] detecting cgroup driver to use...
	I1122 20:45:24.013635    4254 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1122 20:45:24.013784    4254 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 20:45:24.028997    4254 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1122 20:45:24.038472    4254 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1122 20:45:24.047928    4254 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1122 20:45:24.047988    4254 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1122 20:45:24.057705    4254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 20:45:24.067172    4254 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1122 20:45:24.076401    4254 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 20:45:24.085810    4254 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 20:45:24.094623    4254 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1122 20:45:24.104355    4254 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 20:45:24.113035    4254 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 20:45:24.121309    4254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 20:45:24.176895    4254 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1122 20:45:24.255550    4254 start.go:472] detecting cgroup driver to use...
	I1122 20:45:24.255570    4254 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1122 20:45:24.255637    4254 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1122 20:45:24.269512    4254 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1122 20:45:24.269577    4254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1122 20:45:24.281359    4254 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 20:45:24.298445    4254 ssh_runner.go:195] Run: which cri-dockerd
	I1122 20:45:24.303417    4254 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1122 20:45:24.313430    4254 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1122 20:45:24.331999    4254 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1122 20:45:24.424693    4254 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1122 20:45:24.516054    4254 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1122 20:45:24.516158    4254 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1122 20:45:24.533874    4254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 20:45:24.611694    4254 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1122 20:45:24.847865    4254 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1122 20:45:24.872416    4254 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
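
The ~130-byte /etc/docker/daemon.json copied a few lines earlier is what pins dockerd to the cgroupfs driver detected on the host. Its exact contents are not reproduced in the log, so the fields in this sketch (exec-opts and log settings) are assumptions rather than a copy of what minikube wrote:

package main

import (
	"encoding/json"
	"fmt"
)

// dockerDaemonConfig models a minimal /etc/docker/daemon.json that forces the
// cgroupfs cgroup driver. The concrete fields are assumptions; the log only
// shows that a ~130-byte daemon.json was copied and docker restarted.
type dockerDaemonConfig struct {
	ExecOpts  []string          `json:"exec-opts"`
	LogDriver string            `json:"log-driver"`
	LogOpts   map[string]string `json:"log-opts"`
}

func main() {
	cfg := dockerDaemonConfig{
		ExecOpts:  []string{"native.cgroupdriver=cgroupfs"},
		LogDriver: "json-file",
		LogOpts:   map[string]string{"max-size": "100m"},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}
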
	I1122 20:45:24.947987    4254 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	I1122 20:45:24.948125    4254 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-009000 dig +short host.docker.internal
	I1122 20:45:25.063898    4254 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1122 20:45:25.063996    4254 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1122 20:45:25.068839    4254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 20:45:25.079732    4254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-009000
	I1122 20:45:25.131845    4254 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1122 20:45:25.131927    4254 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1122 20:45:25.152410    4254 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1122 20:45:25.152423    4254 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1122 20:45:25.152491    4254 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1122 20:45:25.161442    4254 ssh_runner.go:195] Run: which lz4
	I1122 20:45:25.165668    4254 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1122 20:45:25.165783    4254 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1122 20:45:25.169979    4254 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1122 20:45:25.170002    4254 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I1122 20:45:30.828019    4254 docker.go:635] Took 5.662398 seconds to copy over tarball
	I1122 20:45:30.828119    4254 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1122 20:45:32.492956    4254 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.664850244s)
	I1122 20:45:32.492975    4254 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1122 20:45:32.537856    4254 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1122 20:45:32.547622    4254 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I1122 20:45:32.563441    4254 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 20:45:32.619118    4254 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1122 20:45:33.600687    4254 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1122 20:45:33.620484    4254 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1122 20:45:33.620504    4254 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
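
The "wasn't preloaded" conclusion comes from comparing the required registry.k8s.io references against the `docker images --format {{.Repository}}:{{.Tag}}` output above, which only contains the old k8s.gcr.io tags. A small sketch of that comparison (missingImages is an illustrative helper):

package main

import (
	"fmt"
	"strings"
)

// missingImages reports which required image references are absent from the
// node's `docker images --format {{.Repository}}:{{.Tag}}` output, mirroring
// the check that concluded registry.k8s.io/kube-apiserver:v1.18.20 "wasn't preloaded".
func missingImages(dockerImagesOutput string, required []string) []string {
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(dockerImagesOutput), "\n") {
		have[strings.TrimSpace(line)] = true
	}
	var missing []string
	for _, img := range required {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	preloaded := "k8s.gcr.io/kube-proxy:v1.18.20\nk8s.gcr.io/kube-apiserver:v1.18.20\nk8s.gcr.io/pause:3.2"
	required := []string{"registry.k8s.io/kube-apiserver:v1.18.20", "registry.k8s.io/pause:3.2"}
	fmt.Println(missingImages(preloaded, required)) // both required images are reported missing
}
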
	I1122 20:45:33.620522    4254 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1122 20:45:33.627976    4254 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1122 20:45:33.628024    4254 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 20:45:33.628094    4254 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1122 20:45:33.628419    4254 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1122 20:45:33.628578    4254 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1122 20:45:33.628761    4254 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1122 20:45:33.628785    4254 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1122 20:45:33.628887    4254 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1122 20:45:33.634127    4254 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1122 20:45:33.634207    4254 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1122 20:45:33.634321    4254 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 20:45:33.634365    4254 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1122 20:45:33.634388    4254 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1122 20:45:33.634446    4254 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1122 20:45:33.634570    4254 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1122 20:45:33.635474    4254 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1122 20:45:34.092907    4254 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1122 20:45:34.097831    4254 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1122 20:45:34.117159    4254 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1122 20:45:34.117207    4254 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1122 20:45:34.117275    4254 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1122 20:45:34.122784    4254 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1122 20:45:34.122811    4254 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1122 20:45:34.122876    4254 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1122 20:45:34.133409    4254 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1122 20:45:34.145003    4254 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17659-904/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1122 20:45:34.151952    4254 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17659-904/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1122 20:45:34.161433    4254 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1122 20:45:34.161464    4254 docker.go:323] Removing image: registry.k8s.io/pause:3.2
	I1122 20:45:34.161557    4254 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I1122 20:45:34.181148    4254 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17659-904/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1122 20:45:34.193490    4254 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1122 20:45:34.212833    4254 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1122 20:45:34.212868    4254 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1122 20:45:34.212931    4254 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1122 20:45:34.231709    4254 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1122 20:45:34.232643    4254 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17659-904/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1122 20:45:34.250026    4254 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1122 20:45:34.250050    4254 docker.go:323] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1122 20:45:34.250111    4254 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I1122 20:45:34.269349    4254 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17659-904/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1122 20:45:34.271580    4254 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1122 20:45:34.289353    4254 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1122 20:45:34.289377    4254 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1122 20:45:34.289435    4254 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I1122 20:45:34.309333    4254 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17659-904/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1122 20:45:34.357165    4254 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 20:45:34.414435    4254 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1122 20:45:34.433411    4254 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1122 20:45:34.433439    4254 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.7
	I1122 20:45:34.433519    4254 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I1122 20:45:34.452202    4254 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17659-904/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1122 20:45:34.452248    4254 cache_images.go:92] LoadImages completed in 831.726752ms
	W1122 20:45:34.452296    4254 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17659-904/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17659-904/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
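
Each "needs transfer" line is produced by inspecting the image inside the node and comparing its ID to the expected hash; on a miss the stale tag is removed with `docker rmi` and the image is queued to load from the on-disk cache (which is what fails here, since the cached file does not exist). A sketch of that per-image check using the same docker commands the log runs (needsTransfer is an illustrative name):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer returns true when the image is absent from the node's runtime
// or present under a different ID than expected, matching the cache_images
// checks in the log. On a mismatch the stale tag is removed with `docker rmi`.
func needsTransfer(image, expectedID string) (bool, error) {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true, nil // not present in the runtime at all
	}
	got := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
	if got == strings.TrimPrefix(expectedID, "sha256:") {
		return false, nil
	}
	if rmErr := exec.Command("docker", "rmi", image).Run(); rmErr != nil {
		return true, fmt.Errorf("removing stale %s: %v", image, rmErr)
	}
	return true, nil
}

func main() {
	ok, err := needsTransfer("registry.k8s.io/pause:3.2",
		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c")
	fmt.Println(ok, err)
}
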
	I1122 20:45:34.452362    4254 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1122 20:45:34.502107    4254 cni.go:84] Creating CNI manager for ""
	I1122 20:45:34.502125    4254 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1122 20:45:34.502147    4254 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1122 20:45:34.502162    4254 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-009000 NodeName:ingress-addon-legacy-009000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1122 20:45:34.502267    4254 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-009000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 20:45:34.502336    4254 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-009000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-009000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1122 20:45:34.502406    4254 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1122 20:45:34.511763    4254 binaries.go:44] Found k8s binaries, skipping transfer
	I1122 20:45:34.511825    4254 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 20:45:34.520561    4254 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1122 20:45:34.536270    4254 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1122 20:45:34.552195    4254 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
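
The kubeadm.yaml written above is rendered from the options struct logged at kubeadm.go:176 (advertise address, API server port, node name, Kubernetes version, pod and service CIDRs). The reduced text/template sketch below wires a few of those values into the ClusterConfiguration section seen in the log; the real template covers many more fields:

package main

import (
	"os"
	"text/template"
)

// A trimmed-down rendering of the ClusterConfiguration section shown in the
// log, driven by the same values (v1.18.20, 192.168.49.2:8443, 10.244.0.0/16,
// 10.96.0.0/12). The real template in minikube covers far more fields.
const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.AdvertiseAddress}}"]
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	params := struct {
		AdvertiseAddress, KubernetesVersion, PodSubnet, ServiceCIDR string
		APIServerPort                                               int
	}{"192.168.49.2", "v1.18.20", "10.244.0.0/16", "10.96.0.0/12", 8443}
	template.Must(template.New("kubeadm").Parse(clusterCfg)).Execute(os.Stdout, params)
}
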
	I1122 20:45:34.567792    4254 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1122 20:45:34.571954    4254 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 20:45:34.582792    4254 certs.go:56] Setting up /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000 for IP: 192.168.49.2
	I1122 20:45:34.582811    4254 certs.go:190] acquiring lock for shared ca certs: {Name:mkfb15a700c9ffbadb2fa513d3b21f0bbc225601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 20:45:34.583070    4254 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17659-904/.minikube/ca.key
	I1122 20:45:34.583155    4254 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17659-904/.minikube/proxy-client-ca.key
	I1122 20:45:34.583199    4254 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/client.key
	I1122 20:45:34.583212    4254 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/client.crt with IP's: []
	I1122 20:45:34.733599    4254 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/client.crt ...
	I1122 20:45:34.733612    4254 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/client.crt: {Name:mk53c57748c7ff70fbbd88ea978b47be653ac7b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 20:45:34.733935    4254 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/client.key ...
	I1122 20:45:34.733949    4254 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/client.key: {Name:mk18897eb881630acf50e39b5339a4d9b1ce0e1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 20:45:34.734170    4254 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/apiserver.key.dd3b5fb2
	I1122 20:45:34.734186    4254 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1122 20:45:34.769829    4254 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/apiserver.crt.dd3b5fb2 ...
	I1122 20:45:34.769836    4254 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/apiserver.crt.dd3b5fb2: {Name:mk9919a66de96f95fc17703890a4641de85ff7f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 20:45:34.770054    4254 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/apiserver.key.dd3b5fb2 ...
	I1122 20:45:34.770062    4254 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/apiserver.key.dd3b5fb2: {Name:mke02f91f8008e0d624cc8a73a26751d120f987c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 20:45:34.770272    4254 certs.go:337] copying /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/apiserver.crt
	I1122 20:45:34.770446    4254 certs.go:341] copying /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/apiserver.key
	I1122 20:45:34.770611    4254 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/proxy-client.key
	I1122 20:45:34.770629    4254 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/proxy-client.crt with IP's: []
	I1122 20:45:34.981210    4254 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/proxy-client.crt ...
	I1122 20:45:34.981225    4254 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/proxy-client.crt: {Name:mk79e0ca4e6e98a376772963aca48ec70a26298d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 20:45:34.981506    4254 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/proxy-client.key ...
	I1122 20:45:34.981515    4254 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/proxy-client.key: {Name:mk4df91d76990f74ce8aaaae619a4df3d97bb516 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 20:45:34.981731    4254 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1122 20:45:34.981763    4254 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1122 20:45:34.981787    4254 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1122 20:45:34.981805    4254 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1122 20:45:34.981823    4254 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17659-904/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1122 20:45:34.981841    4254 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17659-904/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1122 20:45:34.981858    4254 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17659-904/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1122 20:45:34.981875    4254 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17659-904/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1122 20:45:34.981989    4254 certs.go:437] found cert: /Users/jenkins/minikube-integration/17659-904/.minikube/certs/Users/jenkins/minikube-integration/17659-904/.minikube/certs/1485.pem (1338 bytes)
	W1122 20:45:34.982046    4254 certs.go:433] ignoring /Users/jenkins/minikube-integration/17659-904/.minikube/certs/Users/jenkins/minikube-integration/17659-904/.minikube/certs/1485_empty.pem, impossibly tiny 0 bytes
	I1122 20:45:34.982056    4254 certs.go:437] found cert: /Users/jenkins/minikube-integration/17659-904/.minikube/certs/Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca-key.pem (1675 bytes)
	I1122 20:45:34.982091    4254 certs.go:437] found cert: /Users/jenkins/minikube-integration/17659-904/.minikube/certs/Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca.pem (1082 bytes)
	I1122 20:45:34.982121    4254 certs.go:437] found cert: /Users/jenkins/minikube-integration/17659-904/.minikube/certs/Users/jenkins/minikube-integration/17659-904/.minikube/certs/cert.pem (1123 bytes)
	I1122 20:45:34.982154    4254 certs.go:437] found cert: /Users/jenkins/minikube-integration/17659-904/.minikube/certs/Users/jenkins/minikube-integration/17659-904/.minikube/certs/key.pem (1679 bytes)
	I1122 20:45:34.982219    4254 certs.go:437] found cert: /Users/jenkins/minikube-integration/17659-904/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17659-904/.minikube/files/etc/ssl/certs/14852.pem (1708 bytes)
	I1122 20:45:34.982261    4254 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17659-904/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1122 20:45:34.982281    4254 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17659-904/.minikube/certs/1485.pem -> /usr/share/ca-certificates/1485.pem
	I1122 20:45:34.982299    4254 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17659-904/.minikube/files/etc/ssl/certs/14852.pem -> /usr/share/ca-certificates/14852.pem
	I1122 20:45:34.982776    4254 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1122 20:45:35.005544    4254 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1122 20:45:35.026060    4254 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 20:45:35.046807    4254 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/ingress-addon-legacy-009000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 20:45:35.068480    4254 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17659-904/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 20:45:35.090406    4254 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17659-904/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 20:45:35.112111    4254 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17659-904/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 20:45:35.134388    4254 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17659-904/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1122 20:45:35.156739    4254 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17659-904/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 20:45:35.177885    4254 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17659-904/.minikube/certs/1485.pem --> /usr/share/ca-certificates/1485.pem (1338 bytes)
	I1122 20:45:35.199257    4254 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17659-904/.minikube/files/etc/ssl/certs/14852.pem --> /usr/share/ca-certificates/14852.pem (1708 bytes)
	I1122 20:45:35.220599    4254 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 20:45:35.236897    4254 ssh_runner.go:195] Run: openssl version
	I1122 20:45:35.242745    4254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 20:45:35.251947    4254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 20:45:35.256186    4254 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 23 04:35 /usr/share/ca-certificates/minikubeCA.pem
	I1122 20:45:35.256230    4254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 20:45:35.262703    4254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 20:45:35.272215    4254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1485.pem && ln -fs /usr/share/ca-certificates/1485.pem /etc/ssl/certs/1485.pem"
	I1122 20:45:35.282024    4254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1485.pem
	I1122 20:45:35.286311    4254 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 23 04:40 /usr/share/ca-certificates/1485.pem
	I1122 20:45:35.286357    4254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1485.pem
	I1122 20:45:35.293340    4254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1485.pem /etc/ssl/certs/51391683.0"
	I1122 20:45:35.302571    4254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14852.pem && ln -fs /usr/share/ca-certificates/14852.pem /etc/ssl/certs/14852.pem"
	I1122 20:45:35.312043    4254 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14852.pem
	I1122 20:45:35.316361    4254 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 23 04:40 /usr/share/ca-certificates/14852.pem
	I1122 20:45:35.316409    4254 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14852.pem
	I1122 20:45:35.322953    4254 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14852.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 20:45:35.332165    4254 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1122 20:45:35.336311    4254 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1122 20:45:35.336356    4254 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-009000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-009000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1122 20:45:35.336452    4254 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1122 20:45:35.355275    4254 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 20:45:35.364286    4254 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 20:45:35.372597    4254 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1122 20:45:35.372725    4254 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 20:45:35.381292    4254 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 20:45:35.381325    4254 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 20:45:35.430564    4254 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1122 20:45:35.430642    4254 kubeadm.go:322] [preflight] Running pre-flight checks
	I1122 20:45:35.669820    4254 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 20:45:35.669912    4254 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 20:45:35.669986    4254 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1122 20:45:35.842729    4254 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 20:45:35.843432    4254 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 20:45:35.843517    4254 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1122 20:45:35.919186    4254 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 20:45:35.940665    4254 out.go:204]   - Generating certificates and keys ...
	I1122 20:45:35.940756    4254 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1122 20:45:35.940841    4254 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1122 20:45:35.997199    4254 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 20:45:36.119104    4254 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1122 20:45:36.261629    4254 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1122 20:45:36.427949    4254 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1122 20:45:36.651483    4254 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1122 20:45:36.651670    4254 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-009000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1122 20:45:36.754425    4254 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1122 20:45:36.754543    4254 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-009000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1122 20:45:36.828388    4254 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 20:45:37.198155    4254 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 20:45:37.323632    4254 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1122 20:45:37.323787    4254 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 20:45:37.389631    4254 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 20:45:37.465532    4254 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 20:45:37.502010    4254 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 20:45:37.801589    4254 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 20:45:37.802203    4254 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 20:45:37.844780    4254 out.go:204]   - Booting up control plane ...
	I1122 20:45:37.844982    4254 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 20:45:37.845087    4254 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 20:45:37.845193    4254 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 20:45:37.845291    4254 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 20:45:37.845479    4254 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1122 20:46:17.811765    4254 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1122 20:46:17.812404    4254 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1122 20:46:17.812589    4254 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1122 20:46:22.814273    4254 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1122 20:46:22.814487    4254 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1122 20:46:32.815510    4254 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1122 20:46:32.815717    4254 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1122 20:46:52.815698    4254 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1122 20:46:52.815869    4254 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1122 20:47:32.816857    4254 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1122 20:47:32.817167    4254 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1122 20:47:32.817221    4254 kubeadm.go:322] 
	I1122 20:47:32.817272    4254 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I1122 20:47:32.817348    4254 kubeadm.go:322] 		timed out waiting for the condition
	I1122 20:47:32.817362    4254 kubeadm.go:322] 
	I1122 20:47:32.817408    4254 kubeadm.go:322] 	This error is likely caused by:
	I1122 20:47:32.817468    4254 kubeadm.go:322] 		- The kubelet is not running
	I1122 20:47:32.817643    4254 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1122 20:47:32.817659    4254 kubeadm.go:322] 
	I1122 20:47:32.817771    4254 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1122 20:47:32.817827    4254 kubeadm.go:322] 		- 'systemctl status kubelet'
	I1122 20:47:32.817859    4254 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I1122 20:47:32.817865    4254 kubeadm.go:322] 
	I1122 20:47:32.818004    4254 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1122 20:47:32.818147    4254 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1122 20:47:32.818157    4254 kubeadm.go:322] 
	I1122 20:47:32.818272    4254 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1122 20:47:32.818330    4254 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I1122 20:47:32.818425    4254 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I1122 20:47:32.818464    4254 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I1122 20:47:32.818472    4254 kubeadm.go:322] 
	I1122 20:47:32.819899    4254 kubeadm.go:322] W1123 04:45:35.429823    1702 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1122 20:47:32.820100    4254 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1122 20:47:32.820183    4254 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1122 20:47:32.820293    4254 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I1122 20:47:32.820393    4254 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1122 20:47:32.820493    4254 kubeadm.go:322] W1123 04:45:37.807310    1702 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1122 20:47:32.820597    4254 kubeadm.go:322] W1123 04:45:37.808239    1702 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1122 20:47:32.820672    4254 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1122 20:47:32.820760    4254 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W1122 20:47:32.820847    4254 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-009000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-009000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1123 04:45:35.429823    1702 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1123 04:45:37.807310    1702 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1123 04:45:37.808239    1702 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-009000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-009000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1123 04:45:35.429823    1702 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1123 04:45:37.807310    1702 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1123 04:45:37.808239    1702 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1122 20:47:32.820886    4254 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1122 20:47:33.229241    4254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 20:47:33.239862    4254 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1122 20:47:33.239917    4254 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 20:47:33.248356    4254 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 20:47:33.248379    4254 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 20:47:33.295166    4254 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1122 20:47:33.295210    4254 kubeadm.go:322] [preflight] Running pre-flight checks
	I1122 20:47:33.520624    4254 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 20:47:33.520710    4254 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 20:47:33.520801    4254 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1122 20:47:33.690432    4254 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 20:47:33.691134    4254 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 20:47:33.691200    4254 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1122 20:47:33.765494    4254 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 20:47:33.787099    4254 out.go:204]   - Generating certificates and keys ...
	I1122 20:47:33.787173    4254 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1122 20:47:33.787242    4254 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1122 20:47:33.787316    4254 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1122 20:47:33.787409    4254 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1122 20:47:33.787520    4254 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1122 20:47:33.787584    4254 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1122 20:47:33.787689    4254 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1122 20:47:33.787755    4254 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1122 20:47:33.787868    4254 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1122 20:47:33.787958    4254 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1122 20:47:33.788013    4254 kubeadm.go:322] [certs] Using the existing "sa" key
	I1122 20:47:33.788098    4254 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 20:47:33.829509    4254 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 20:47:34.023707    4254 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 20:47:34.226646    4254 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 20:47:34.346798    4254 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 20:47:34.347412    4254 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 20:47:34.369005    4254 out.go:204]   - Booting up control plane ...
	I1122 20:47:34.369172    4254 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 20:47:34.369274    4254 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 20:47:34.369368    4254 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 20:47:34.369466    4254 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 20:47:34.369665    4254 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1122 20:48:14.355986    4254 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1122 20:48:14.356831    4254 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1122 20:48:14.357062    4254 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1122 20:48:19.358061    4254 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1122 20:48:19.358218    4254 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1122 20:48:29.359256    4254 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1122 20:48:29.359477    4254 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1122 20:48:49.359882    4254 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1122 20:48:49.360029    4254 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1122 20:49:29.361182    4254 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1122 20:49:29.361397    4254 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1122 20:49:29.361412    4254 kubeadm.go:322] 
	I1122 20:49:29.361456    4254 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I1122 20:49:29.361497    4254 kubeadm.go:322] 		timed out waiting for the condition
	I1122 20:49:29.361505    4254 kubeadm.go:322] 
	I1122 20:49:29.361536    4254 kubeadm.go:322] 	This error is likely caused by:
	I1122 20:49:29.361598    4254 kubeadm.go:322] 		- The kubelet is not running
	I1122 20:49:29.361768    4254 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1122 20:49:29.361779    4254 kubeadm.go:322] 
	I1122 20:49:29.361895    4254 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1122 20:49:29.361935    4254 kubeadm.go:322] 		- 'systemctl status kubelet'
	I1122 20:49:29.361968    4254 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I1122 20:49:29.361973    4254 kubeadm.go:322] 
	I1122 20:49:29.362083    4254 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1122 20:49:29.362172    4254 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1122 20:49:29.362180    4254 kubeadm.go:322] 
	I1122 20:49:29.362273    4254 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1122 20:49:29.362346    4254 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I1122 20:49:29.362428    4254 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I1122 20:49:29.362459    4254 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I1122 20:49:29.362465    4254 kubeadm.go:322] 
	I1122 20:49:29.364020    4254 kubeadm.go:322] W1123 04:47:33.294082    4758 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1122 20:49:29.364170    4254 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1122 20:49:29.364231    4254 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1122 20:49:29.364346    4254 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I1122 20:49:29.364425    4254 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1122 20:49:29.364526    4254 kubeadm.go:322] W1123 04:47:34.351140    4758 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1122 20:49:29.364651    4254 kubeadm.go:322] W1123 04:47:34.352483    4758 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1122 20:49:29.364721    4254 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1122 20:49:29.364773    4254 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I1122 20:49:29.364816    4254 kubeadm.go:406] StartCluster complete in 3m54.033514809s
	I1122 20:49:29.364897    4254 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1122 20:49:29.382992    4254 logs.go:284] 0 containers: []
	W1122 20:49:29.383005    4254 logs.go:286] No container was found matching "kube-apiserver"
	I1122 20:49:29.383068    4254 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1122 20:49:29.401255    4254 logs.go:284] 0 containers: []
	W1122 20:49:29.401269    4254 logs.go:286] No container was found matching "etcd"
	I1122 20:49:29.401345    4254 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1122 20:49:29.418712    4254 logs.go:284] 0 containers: []
	W1122 20:49:29.418726    4254 logs.go:286] No container was found matching "coredns"
	I1122 20:49:29.418798    4254 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1122 20:49:29.438464    4254 logs.go:284] 0 containers: []
	W1122 20:49:29.438478    4254 logs.go:286] No container was found matching "kube-scheduler"
	I1122 20:49:29.438547    4254 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1122 20:49:29.456556    4254 logs.go:284] 0 containers: []
	W1122 20:49:29.456577    4254 logs.go:286] No container was found matching "kube-proxy"
	I1122 20:49:29.456668    4254 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1122 20:49:29.476168    4254 logs.go:284] 0 containers: []
	W1122 20:49:29.476183    4254 logs.go:286] No container was found matching "kube-controller-manager"
	I1122 20:49:29.476247    4254 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1122 20:49:29.494040    4254 logs.go:284] 0 containers: []
	W1122 20:49:29.494055    4254 logs.go:286] No container was found matching "kindnet"
	I1122 20:49:29.494063    4254 logs.go:123] Gathering logs for describe nodes ...
	I1122 20:49:29.494075    4254 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 20:49:29.547870    4254 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 20:49:29.547889    4254 logs.go:123] Gathering logs for Docker ...
	I1122 20:49:29.547898    4254 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1122 20:49:29.563359    4254 logs.go:123] Gathering logs for container status ...
	I1122 20:49:29.563373    4254 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 20:49:29.618913    4254 logs.go:123] Gathering logs for kubelet ...
	I1122 20:49:29.618927    4254 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 20:49:29.655344    4254 logs.go:123] Gathering logs for dmesg ...
	I1122 20:49:29.655360    4254 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1122 20:49:29.669308    4254 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1123 04:47:33.294082    4758 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1123 04:47:34.351140    4758 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1123 04:47:34.352483    4758 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1122 20:49:29.669331    4254 out.go:239] * 
	* 
	W1122 20:49:29.669375    4254 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1123 04:47:33.294082    4758 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1123 04:47:34.351140    4758 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1123 04:47:34.352483    4758 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1123 04:47:33.294082    4758 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1123 04:47:34.351140    4758 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1123 04:47:34.352483    4758 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1122 20:49:29.669390    4254 out.go:239] * 
	* 
	W1122 20:49:29.670029    4254 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1122 20:49:29.734767    4254 out.go:177] 
	W1122 20:49:29.776886    4254 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1123 04:47:33.294082    4758 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1123 04:47:34.351140    4758 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1123 04:47:34.352483    4758 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1123 04:47:33.294082    4758 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1123 04:47:34.351140    4758 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1123 04:47:34.352483    4758 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1122 20:49:29.776965    4254 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1122 20:49:29.777006    4254 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1122 20:49:29.798742    4254 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-009000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (258.85s)
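Note: the repeated kubeadm failures above all have the same shape: the wait-control-plane phase times out because the kubelet health endpoint (127.0.0.1:10248/healthz) refuses every connection, while the preflight warnings flag the "cgroupfs" Docker cgroup driver (systemd is recommended), swap being enabled, the kubelet service not being enabled, and Docker 24.0.7 not being on the validated list for Kubernetes v1.18.20. The log's own suggestion is to retry with the kubelet cgroup driver pinned to systemd. A hedged retry sketch, reusing the profile name and flags from the failing invocation above:

    # Suggested by the log above: align the kubelet cgroup driver with Docker's recommended driver.
    out/minikube-darwin-amd64 delete -p ingress-addon-legacy-009000
    out/minikube-darwin-amd64 start -p ingress-addon-legacy-009000 \
        --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker \
        --extra-config=kubelet.cgroup-driver=systemd
    # If the kubelet still never becomes healthy, inspect it inside the node container:
    docker exec ingress-addon-legacy-009000 journalctl -xeu kubelet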

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (112.02s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-009000 addons enable ingress --alsologtostderr -v=5
E1122 20:50:41.783827    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-009000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m51.573364552s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 20:49:29.984630    4470 out.go:296] Setting OutFile to fd 1 ...
	I1122 20:49:29.984966    4470 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 20:49:29.984973    4470 out.go:309] Setting ErrFile to fd 2...
	I1122 20:49:29.984977    4470 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 20:49:29.985153    4470 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
	I1122 20:49:29.985540    4470 mustload.go:65] Loading cluster: ingress-addon-legacy-009000
	I1122 20:49:29.985824    4470 config.go:182] Loaded profile config "ingress-addon-legacy-009000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1122 20:49:29.985843    4470 addons.go:594] checking whether the cluster is paused
	I1122 20:49:29.985923    4470 config.go:182] Loaded profile config "ingress-addon-legacy-009000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1122 20:49:29.985939    4470 host.go:66] Checking if "ingress-addon-legacy-009000" exists ...
	I1122 20:49:29.986365    4470 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-009000 --format={{.State.Status}}
	I1122 20:49:30.041166    4470 ssh_runner.go:195] Run: systemctl --version
	I1122 20:49:30.041260    4470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-009000
	I1122 20:49:30.091290    4470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50457 SSHKeyPath:/Users/jenkins/minikube-integration/17659-904/.minikube/machines/ingress-addon-legacy-009000/id_rsa Username:docker}
	I1122 20:49:30.176766    4470 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1122 20:49:30.215650    4470 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1122 20:49:30.236644    4470 config.go:182] Loaded profile config "ingress-addon-legacy-009000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1122 20:49:30.236672    4470 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-009000"
	I1122 20:49:30.236684    4470 addons.go:231] Setting addon ingress=true in "ingress-addon-legacy-009000"
	I1122 20:49:30.236756    4470 host.go:66] Checking if "ingress-addon-legacy-009000" exists ...
	I1122 20:49:30.237324    4470 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-009000 --format={{.State.Status}}
	I1122 20:49:30.310568    4470 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1122 20:49:30.332521    4470 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I1122 20:49:30.375421    4470 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1122 20:49:30.396615    4470 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1122 20:49:30.418596    4470 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1122 20:49:30.418630    4470 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I1122 20:49:30.418750    4470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-009000
	I1122 20:49:30.474158    4470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50457 SSHKeyPath:/Users/jenkins/minikube-integration/17659-904/.minikube/machines/ingress-addon-legacy-009000/id_rsa Username:docker}
	I1122 20:49:30.572644    4470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1122 20:49:30.622332    4470 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:49:30.622361    4470 retry.go:31] will retry after 306.799348ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:49:30.930671    4470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1122 20:49:30.981114    4470 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:49:30.981136    4470 retry.go:31] will retry after 436.135417ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:49:31.417680    4470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1122 20:49:31.486779    4470 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:49:31.486806    4470 retry.go:31] will retry after 343.478795ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:49:31.832561    4470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1122 20:49:31.885031    4470 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:49:31.885053    4470 retry.go:31] will retry after 503.751709ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:49:32.388925    4470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1122 20:49:32.444268    4470 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:49:32.444289    4470 retry.go:31] will retry after 1.731928145s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:49:34.178474    4470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1122 20:49:34.240825    4470 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:49:34.240851    4470 retry.go:31] will retry after 2.137765343s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:49:36.378795    4470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1122 20:49:36.429128    4470 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:49:36.429150    4470 retry.go:31] will retry after 3.847133459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:49:40.278452    4470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1122 20:49:40.334966    4470 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:49:40.334985    4470 retry.go:31] will retry after 5.664924925s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:49:46.002076    4470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1122 20:49:46.058163    4470 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:49:46.058187    4470 retry.go:31] will retry after 9.580374981s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:49:55.638519    4470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1122 20:49:55.690354    4470 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:49:55.690376    4470 retry.go:31] will retry after 8.335787985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:50:04.028279    4470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1122 20:50:04.091055    4470 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:50:04.091071    4470 retry.go:31] will retry after 8.002478733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:50:12.093895    4470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1122 20:50:12.155157    4470 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:50:12.155177    4470 retry.go:31] will retry after 24.294871368s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:50:36.449944    4470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1122 20:50:36.500138    4470 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:50:36.500161    4470 retry.go:31] will retry after 44.806838835s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:21.308270    4470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1122 20:51:21.358363    4470 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:21.358389    4470 addons.go:467] Verifying addon ingress=true in "ingress-addon-legacy-009000"
	I1122 20:51:21.380180    4470 out.go:177] * Verifying ingress addon...
	I1122 20:51:21.424154    4470 out.go:177] 
	W1122 20:51:21.446239    4470 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-009000" does not exist: client config: context "ingress-addon-legacy-009000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-009000" does not exist: client config: context "ingress-addon-legacy-009000" does not exist]
	W1122 20:51:21.446294    4470 out.go:239] * 
	* 
	W1122 20:51:21.449821    4470 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1122 20:51:21.470860    4470 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
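Note: this follow-on failure is a consequence of the cluster-start failure above. Every retry of kubectl apply against /etc/kubernetes/addons/ingress-deploy.yaml is refused at localhost:8443 because the apiserver never came up, and the final MK_ADDON_ENABLE error additionally reports that the "ingress-addon-legacy-009000" kubeconfig context does not exist. Hedged spot-checks for this failure mode, using names taken from the log above:

    # Confirm the control plane really is down before suspecting the addon itself.
    out/minikube-darwin-amd64 status -p ingress-addon-legacy-009000
    # List Kubernetes containers the way kubeadm's own hint suggests, run inside the
    # kic node container (the docker runtime for this profile lives inside it):
    docker exec ingress-addon-legacy-009000 docker ps -a | grep kube | grep -v pause
    # The kubeconfig context named in the error is expected to be absent:
    kubectl config get-contexts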
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-009000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-009000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a42fa2bb6baea5af0c2000680f2aa19144bf4de0e8cb586418a043f43d7a78ff",
	        "Created": "2023-11-23T04:45:20.604863929Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51835,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-23T04:45:20.819714949Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7b13b8068c138827ed6edd3fefc1858e39f15798035b600ada929f3fdbe10859",
	        "ResolvConfPath": "/var/lib/docker/containers/a42fa2bb6baea5af0c2000680f2aa19144bf4de0e8cb586418a043f43d7a78ff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a42fa2bb6baea5af0c2000680f2aa19144bf4de0e8cb586418a043f43d7a78ff/hostname",
	        "HostsPath": "/var/lib/docker/containers/a42fa2bb6baea5af0c2000680f2aa19144bf4de0e8cb586418a043f43d7a78ff/hosts",
	        "LogPath": "/var/lib/docker/containers/a42fa2bb6baea5af0c2000680f2aa19144bf4de0e8cb586418a043f43d7a78ff/a42fa2bb6baea5af0c2000680f2aa19144bf4de0e8cb586418a043f43d7a78ff-json.log",
	        "Name": "/ingress-addon-legacy-009000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-009000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-009000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c0fca2b70369fb98df4cea23c772f5ca4a108f16ab26fcbd55f869942ac70850-init/diff:/var/lib/docker/overlay2/606115cd9540a7d654370107a965f1b1790179a3ca089bbe01bbb38c883936cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0fca2b70369fb98df4cea23c772f5ca4a108f16ab26fcbd55f869942ac70850/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0fca2b70369fb98df4cea23c772f5ca4a108f16ab26fcbd55f869942ac70850/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0fca2b70369fb98df4cea23c772f5ca4a108f16ab26fcbd55f869942ac70850/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-009000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-009000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-009000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-009000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-009000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "79af09fb288b1704384ad429e1c3574ad7643b1ca6ba29fce9358df3b0598d9a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50457"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50458"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50459"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50455"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50456"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/79af09fb288b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-009000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a42fa2bb6bae",
	                        "ingress-addon-legacy-009000"
	                    ],
	                    "NetworkID": "89455ca501981772ca7f3ce9685fd4a6ec85bcb40d8058a0905a385cbb5b1421",
	                    "EndpointID": "4efa92a726e26b422c2065b80797db878b494252dfeb088e1786f02a0f647349",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-009000 -n ingress-addon-legacy-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-009000 -n ingress-addon-legacy-009000: exit status 6 (392.101957ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 20:51:21.929009    4514 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-009000" does not appear in /Users/jenkins/minikube-integration/17659-904/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-009000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (112.02s)
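The stderr above pins down why the status check degrades to exit status 6: the profile "ingress-addon-legacy-009000" has no endpoint entry in the kubeconfig at /Users/jenkins/minikube-integration/17659-904/kubeconfig, so `minikube status` flags the kubeconfig as stale even though the container itself is running. A minimal manual check along the lines the warning suggests (a sketch only; the profile name and kubeconfig path are taken from the log above, and the commands are ordinary kubectl/minikube CLI usage, not part of the test suite):

	# list the contexts actually recorded in the kubeconfig used by this run
	kubectl config get-contexts --kubeconfig /Users/jenkins/minikube-integration/17659-904/kubeconfig

	# rewrite the context for the running profile, as the WARNING above recommends
	out/minikube-darwin-amd64 -p ingress-addon-legacy-009000 update-context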

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (95.04s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-009000 addons enable ingress-dns --alsologtostderr -v=5
E1122 20:52:30.962664    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-009000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m34.608411722s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 20:51:22.000250    4524 out.go:296] Setting OutFile to fd 1 ...
	I1122 20:51:22.000647    4524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 20:51:22.000654    4524 out.go:309] Setting ErrFile to fd 2...
	I1122 20:51:22.000658    4524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 20:51:22.000837    4524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
	I1122 20:51:22.001216    4524 mustload.go:65] Loading cluster: ingress-addon-legacy-009000
	I1122 20:51:22.001497    4524 config.go:182] Loaded profile config "ingress-addon-legacy-009000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1122 20:51:22.001514    4524 addons.go:594] checking whether the cluster is paused
	I1122 20:51:22.001596    4524 config.go:182] Loaded profile config "ingress-addon-legacy-009000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1122 20:51:22.001612    4524 host.go:66] Checking if "ingress-addon-legacy-009000" exists ...
	I1122 20:51:22.002006    4524 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-009000 --format={{.State.Status}}
	I1122 20:51:22.054747    4524 ssh_runner.go:195] Run: systemctl --version
	I1122 20:51:22.054847    4524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-009000
	I1122 20:51:22.132107    4524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50457 SSHKeyPath:/Users/jenkins/minikube-integration/17659-904/.minikube/machines/ingress-addon-legacy-009000/id_rsa Username:docker}
	I1122 20:51:22.236751    4524 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1122 20:51:22.333192    4524 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1122 20:51:22.362476    4524 config.go:182] Loaded profile config "ingress-addon-legacy-009000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1122 20:51:22.362502    4524 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-009000"
	I1122 20:51:22.362513    4524 addons.go:231] Setting addon ingress-dns=true in "ingress-addon-legacy-009000"
	I1122 20:51:22.362563    4524 host.go:66] Checking if "ingress-addon-legacy-009000" exists ...
	I1122 20:51:22.363194    4524 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-009000 --format={{.State.Status}}
	I1122 20:51:22.465308    4524 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1122 20:51:22.486218    4524 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I1122 20:51:22.507378    4524 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1122 20:51:22.507409    4524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I1122 20:51:22.507528    4524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-009000
	I1122 20:51:22.566112    4524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50457 SSHKeyPath:/Users/jenkins/minikube-integration/17659-904/.minikube/machines/ingress-addon-legacy-009000/id_rsa Username:docker}
	I1122 20:51:22.673063    4524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1122 20:51:22.736640    4524 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:22.736678    4524 retry.go:31] will retry after 365.829257ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:23.104432    4524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1122 20:51:23.168757    4524 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:23.168781    4524 retry.go:31] will retry after 423.840908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:23.592877    4524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1122 20:51:23.653681    4524 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:23.653705    4524 retry.go:31] will retry after 785.037741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:24.438877    4524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1122 20:51:24.582589    4524 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:24.582606    4524 retry.go:31] will retry after 502.052494ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:25.084769    4524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1122 20:51:25.133663    4524 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:25.133680    4524 retry.go:31] will retry after 1.570339636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:26.705616    4524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1122 20:51:26.755068    4524 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:26.755091    4524 retry.go:31] will retry after 2.621581411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:29.378920    4524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1122 20:51:29.429987    4524 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:29.430005    4524 retry.go:31] will retry after 2.472671958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:31.904201    4524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1122 20:51:31.968524    4524 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:31.968541    4524 retry.go:31] will retry after 4.110394666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:36.079214    4524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1122 20:51:36.128181    4524 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:36.128198    4524 retry.go:31] will retry after 9.491954376s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:45.621239    4524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1122 20:51:45.677795    4524 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:45.677817    4524 retry.go:31] will retry after 8.240971954s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:53.918787    4524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1122 20:51:54.092404    4524 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:51:54.092426    4524 retry.go:31] will retry after 8.326739775s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:52:02.419301    4524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1122 20:52:02.499143    4524 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:52:02.499162    4524 retry.go:31] will retry after 18.205059555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:52:20.704053    4524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1122 20:52:20.767584    4524 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:52:20.767602    4524 retry.go:31] will retry after 35.636429154s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:52:56.404930    4524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1122 20:52:56.454659    4524 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1122 20:52:56.476463    4524 out.go:177] 
	W1122 20:52:56.498299    4524 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W1122 20:52:56.498336    4524 out.go:239] * 
	* 
	W1122 20:52:56.501900    4524 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1122 20:52:56.523365    4524 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-009000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-009000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a42fa2bb6baea5af0c2000680f2aa19144bf4de0e8cb586418a043f43d7a78ff",
	        "Created": "2023-11-23T04:45:20.604863929Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51835,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-23T04:45:20.819714949Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7b13b8068c138827ed6edd3fefc1858e39f15798035b600ada929f3fdbe10859",
	        "ResolvConfPath": "/var/lib/docker/containers/a42fa2bb6baea5af0c2000680f2aa19144bf4de0e8cb586418a043f43d7a78ff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a42fa2bb6baea5af0c2000680f2aa19144bf4de0e8cb586418a043f43d7a78ff/hostname",
	        "HostsPath": "/var/lib/docker/containers/a42fa2bb6baea5af0c2000680f2aa19144bf4de0e8cb586418a043f43d7a78ff/hosts",
	        "LogPath": "/var/lib/docker/containers/a42fa2bb6baea5af0c2000680f2aa19144bf4de0e8cb586418a043f43d7a78ff/a42fa2bb6baea5af0c2000680f2aa19144bf4de0e8cb586418a043f43d7a78ff-json.log",
	        "Name": "/ingress-addon-legacy-009000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-009000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-009000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c0fca2b70369fb98df4cea23c772f5ca4a108f16ab26fcbd55f869942ac70850-init/diff:/var/lib/docker/overlay2/606115cd9540a7d654370107a965f1b1790179a3ca089bbe01bbb38c883936cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0fca2b70369fb98df4cea23c772f5ca4a108f16ab26fcbd55f869942ac70850/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0fca2b70369fb98df4cea23c772f5ca4a108f16ab26fcbd55f869942ac70850/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0fca2b70369fb98df4cea23c772f5ca4a108f16ab26fcbd55f869942ac70850/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-009000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-009000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-009000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-009000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-009000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "79af09fb288b1704384ad429e1c3574ad7643b1ca6ba29fce9358df3b0598d9a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50457"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50458"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50459"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50455"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50456"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/79af09fb288b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-009000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a42fa2bb6bae",
	                        "ingress-addon-legacy-009000"
	                    ],
	                    "NetworkID": "89455ca501981772ca7f3ce9685fd4a6ec85bcb40d8058a0905a385cbb5b1421",
	                    "EndpointID": "4efa92a726e26b422c2065b80797db878b494252dfeb088e1786f02a0f647349",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-009000 -n ingress-addon-legacy-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-009000 -n ingress-addon-legacy-009000: exit status 6 (375.342009ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 20:52:56.964538    4572 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-009000" does not appear in /Users/jenkins/minikube-integration/17659-904/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-009000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (95.04s)
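Every retry in the stderr above fails with "The connection to the server localhost:8443 was refused", i.e. the addon manifest reaches the node but nothing is listening on the apiserver port, which is why the enable step ultimately exits with MK_ADDON_ENABLE. A quick manual probe of whether an apiserver container ever came up inside the node (a sketch under assumptions: the profile name comes from the log, `minikube ssh` and `docker ps` are standard commands, and the `kube-apiserver` name filter is only an assumption about how the runtime names that container):

	# run docker inside the node and look for a running apiserver container
	out/minikube-darwin-amd64 -p ingress-addon-legacy-009000 ssh -- docker ps --filter name=kube-apiserver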

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.42s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:200: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-009000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-009000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a42fa2bb6baea5af0c2000680f2aa19144bf4de0e8cb586418a043f43d7a78ff",
	        "Created": "2023-11-23T04:45:20.604863929Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51835,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-23T04:45:20.819714949Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7b13b8068c138827ed6edd3fefc1858e39f15798035b600ada929f3fdbe10859",
	        "ResolvConfPath": "/var/lib/docker/containers/a42fa2bb6baea5af0c2000680f2aa19144bf4de0e8cb586418a043f43d7a78ff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a42fa2bb6baea5af0c2000680f2aa19144bf4de0e8cb586418a043f43d7a78ff/hostname",
	        "HostsPath": "/var/lib/docker/containers/a42fa2bb6baea5af0c2000680f2aa19144bf4de0e8cb586418a043f43d7a78ff/hosts",
	        "LogPath": "/var/lib/docker/containers/a42fa2bb6baea5af0c2000680f2aa19144bf4de0e8cb586418a043f43d7a78ff/a42fa2bb6baea5af0c2000680f2aa19144bf4de0e8cb586418a043f43d7a78ff-json.log",
	        "Name": "/ingress-addon-legacy-009000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-009000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-009000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c0fca2b70369fb98df4cea23c772f5ca4a108f16ab26fcbd55f869942ac70850-init/diff:/var/lib/docker/overlay2/606115cd9540a7d654370107a965f1b1790179a3ca089bbe01bbb38c883936cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0fca2b70369fb98df4cea23c772f5ca4a108f16ab26fcbd55f869942ac70850/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0fca2b70369fb98df4cea23c772f5ca4a108f16ab26fcbd55f869942ac70850/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0fca2b70369fb98df4cea23c772f5ca4a108f16ab26fcbd55f869942ac70850/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-009000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-009000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-009000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-009000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-009000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "79af09fb288b1704384ad429e1c3574ad7643b1ca6ba29fce9358df3b0598d9a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50457"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50458"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50459"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50455"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50456"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/79af09fb288b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-009000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a42fa2bb6bae",
	                        "ingress-addon-legacy-009000"
	                    ],
	                    "NetworkID": "89455ca501981772ca7f3ce9685fd4a6ec85bcb40d8058a0905a385cbb5b1421",
	                    "EndpointID": "4efa92a726e26b422c2065b80797db878b494252dfeb088e1786f02a0f647349",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-009000 -n ingress-addon-legacy-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-009000 -n ingress-addon-legacy-009000: exit status 6 (367.877183ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 20:52:57.389808    4584 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-009000" does not appear in /Users/jenkins/minikube-integration/17659-904/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-009000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.42s)
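The "failed to get Kubernetes client: <nil>" message is consistent with the same missing kubeconfig entry seen in the earlier subtests: the test cannot build a client because no server address was ever written for the profile. One way to confirm by hand which clusters the kubeconfig currently knows about (a sketch only; the kubeconfig path is taken from the log, and the kubectl invocation is ordinary CLI usage rather than anything the test itself runs):

	# show the kubeconfig used by the run and check whether the profile appears in it
	kubectl config view --kubeconfig /Users/jenkins/minikube-integration/17659-904/kubeconfig | grep ingress-addon-legacy-009000 || echo "profile missing from kubeconfig"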

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (872.43s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-326000 ssh -- ls /minikube-host
E1122 20:57:30.954982    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 20:57:57.923915    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 20:58:54.003086    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 21:02:31.073159    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 21:02:58.041504    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 21:04:21.094475    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 21:07:31.072772    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 21:07:58.040805    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
mount_start_test.go:114: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-2-326000 ssh -- ls /minikube-host: signal: killed (14m31.998876879s)
mount_start_test.go:116: mount failed: "out/minikube-darwin-amd64 -p mount-start-2-326000 ssh -- ls /minikube-host" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/VerifyMountPostStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-2-326000
helpers_test.go:235: (dbg) docker inspect mount-start-2-326000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4fb0493f9c924cb60b98452e41edcc606c26cc3208e9abaa535a6c809007a0bf",
	        "Created": "2023-11-23T04:56:50.133502229Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 100568,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-23T04:57:01.414246797Z",
	            "FinishedAt": "2023-11-23T04:56:59.067222902Z"
	        },
	        "Image": "sha256:7b13b8068c138827ed6edd3fefc1858e39f15798035b600ada929f3fdbe10859",
	        "ResolvConfPath": "/var/lib/docker/containers/4fb0493f9c924cb60b98452e41edcc606c26cc3208e9abaa535a6c809007a0bf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4fb0493f9c924cb60b98452e41edcc606c26cc3208e9abaa535a6c809007a0bf/hostname",
	        "HostsPath": "/var/lib/docker/containers/4fb0493f9c924cb60b98452e41edcc606c26cc3208e9abaa535a6c809007a0bf/hosts",
	        "LogPath": "/var/lib/docker/containers/4fb0493f9c924cb60b98452e41edcc606c26cc3208e9abaa535a6c809007a0bf/4fb0493f9c924cb60b98452e41edcc606c26cc3208e9abaa535a6c809007a0bf-json.log",
	        "Name": "/mount-start-2-326000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "mount-start-2-326000:/var",
	                "/host_mnt/Users:/minikube-host"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "mount-start-2-326000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a237362671e0506317a315af5436892c654494cbd57141c1a9d905ca109186d7-init/diff:/var/lib/docker/overlay2/606115cd9540a7d654370107a965f1b1790179a3ca089bbe01bbb38c883936cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a237362671e0506317a315af5436892c654494cbd57141c1a9d905ca109186d7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a237362671e0506317a315af5436892c654494cbd57141c1a9d905ca109186d7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a237362671e0506317a315af5436892c654494cbd57141c1a9d905ca109186d7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "mount-start-2-326000",
	                "Source": "/var/lib/docker/volumes/mount-start-2-326000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/host_mnt/Users",
	                "Destination": "/minikube-host",
	                "Mode": "",
	                "RW": true,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "mount-start-2-326000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "mount-start-2-326000",
	                "name.minikube.sigs.k8s.io": "mount-start-2-326000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7153a55490a6734330ecc366a3370b598f6cc078b7357ce4373f4c2681ff0f0e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50763"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50764"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50765"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50766"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50767"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7153a55490a6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "mount-start-2-326000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4fb0493f9c92",
	                        "mount-start-2-326000"
	                    ],
	                    "NetworkID": "6eb8ad884cefde42aa861496a92d1773bc57b015032e80c10bef2e7ad82a37af",
	                    "EndpointID": "cc7fdbc3a1ce8d7a9466cc4adc9750bc5b9073672a9262678a082f70d7add68b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
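The inspect output above confirms the bind mount the test depends on: /host_mnt/Users is mapped into the container at /minikube-host, and the node's SSH port is published on 127.0.0.1:50763. As a minimal sketch for narrowing this down without reading the full JSON (container name taken from the inspect output above; jq is an assumed extra tool), the relevant fields can be queried directly:

	# Show only the mounts of the node container
	docker inspect -f '{{json .Mounts}}' mount-start-2-326000 | jq .
	# Show the host port mapped to the container's SSH port (22/tcp)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' mount-start-2-326000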
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-326000 -n mount-start-2-326000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-326000 -n mount-start-2-326000: exit status 6 (371.842243ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 21:11:41.139015    6497 status.go:415] kubeconfig endpoint: extract IP: "mount-start-2-326000" does not appear in /Users/jenkins/minikube-integration/17659-904/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-326000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountPostStop (872.43s)
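The failure itself is the ssh mount check hanging until it was killed after roughly 14m32s, followed by a stale kubeconfig entry that makes the status probe exit with code 6. A minimal sketch for re-running the two checks by hand against this profile, using the same commands that appear in the log above (update-context is the fix the warning itself suggests; -p selects the profile):

	# Re-run the mount verification that was killed
	out/minikube-darwin-amd64 -p mount-start-2-326000 ssh -- ls /minikube-host
	# Repair the kubectl context the status probe complained about, then re-check status
	out/minikube-darwin-amd64 -p mount-start-2-326000 update-context
	out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-2-326000 -n mount-start-2-326000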

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (751.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-690000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E1122 21:12:58.041772    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 21:15:34.121171    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 21:17:31.070523    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 21:17:58.040069    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 21:21:01.094929    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 21:22:31.069662    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 21:22:58.040076    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-690000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : exit status 52 (12m31.711070368s)

                                                
                                                
-- stdout --
	* [multinode-690000] minikube v1.32.0 on Darwin 14.1.1
	  - MINIKUBE_LOCATION=17659
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node multinode-690000 in cluster multinode-690000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-690000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 21:12:50.171992    6606 out.go:296] Setting OutFile to fd 1 ...
	I1122 21:12:50.172202    6606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 21:12:50.172208    6606 out.go:309] Setting ErrFile to fd 2...
	I1122 21:12:50.172212    6606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 21:12:50.172398    6606 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
	I1122 21:12:50.173859    6606 out.go:303] Setting JSON to false
	I1122 21:12:50.196058    6606 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2544,"bootTime":1700713826,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1122 21:12:50.196159    6606 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1122 21:12:50.217696    6606 out.go:177] * [multinode-690000] minikube v1.32.0 on Darwin 14.1.1
	I1122 21:12:50.260916    6606 out.go:177]   - MINIKUBE_LOCATION=17659
	I1122 21:12:50.261015    6606 notify.go:220] Checking for updates...
	I1122 21:12:50.304631    6606 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	I1122 21:12:50.326469    6606 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1122 21:12:50.347610    6606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 21:12:50.368700    6606 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	I1122 21:12:50.390586    6606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 21:12:50.412937    6606 driver.go:378] Setting default libvirt URI to qemu:///system
	I1122 21:12:50.469074    6606 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.2 (129061)
	I1122 21:12:50.469221    6606 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 21:12:50.568327    6606 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:68 SystemTime:2023-11-23 05:12:50.558250114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218719232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-160d99154625 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=u
nconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription
:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Do
cker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1122 21:12:50.610465    6606 out.go:177] * Using the docker driver based on user configuration
	I1122 21:12:50.632285    6606 start.go:298] selected driver: docker
	I1122 21:12:50.632325    6606 start.go:902] validating driver "docker" against <nil>
	I1122 21:12:50.632344    6606 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 21:12:50.636729    6606 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 21:12:50.736780    6606 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:68 SystemTime:2023-11-23 05:12:50.727108566 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218719232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-160d99154625 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=u
nconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription
:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Do
cker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1122 21:12:50.736959    6606 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1122 21:12:50.737168    6606 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 21:12:50.759378    6606 out.go:177] * Using Docker Desktop driver with root privileges
	I1122 21:12:50.781960    6606 cni.go:84] Creating CNI manager for ""
	I1122 21:12:50.781979    6606 cni.go:136] 0 nodes found, recommending kindnet
	I1122 21:12:50.781988    6606 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 21:12:50.782000    6606 start_flags.go:323] config:
	{Name:multinode-690000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-690000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1122 21:12:50.823882    6606 out.go:177] * Starting control plane node multinode-690000 in cluster multinode-690000
	I1122 21:12:50.844990    6606 cache.go:121] Beginning downloading kic base image for docker with docker
	I1122 21:12:50.865728    6606 out.go:177] * Pulling base image ...
	I1122 21:12:50.908137    6606 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 21:12:50.908196    6606 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1122 21:12:50.908241    6606 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1122 21:12:50.908264    6606 cache.go:56] Caching tarball of preloaded images
	I1122 21:12:50.908598    6606 preload.go:174] Found /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1122 21:12:50.908627    6606 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1122 21:12:50.910561    6606 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/multinode-690000/config.json ...
	I1122 21:12:50.910671    6606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/multinode-690000/config.json: {Name:mke4da8a4c115ee9c967201a1c4b9655381737b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 21:12:50.971930    6606 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1122 21:12:50.971946    6606 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1122 21:12:50.971964    6606 cache.go:194] Successfully downloaded all kic artifacts
	I1122 21:12:50.971998    6606 start.go:365] acquiring machines lock for multinode-690000: {Name:mkea132b81449eba447ff3987e05bb42c1c9416e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 21:12:50.972143    6606 start.go:369] acquired machines lock for "multinode-690000" in 133.159µs
	I1122 21:12:50.972168    6606 start.go:93] Provisioning new machine with config: &{Name:multinode-690000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-690000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1122 21:12:50.972239    6606 start.go:125] createHost starting for "" (driver="docker")
	I1122 21:12:50.993978    6606 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1122 21:12:50.994355    6606 start.go:159] libmachine.API.Create for "multinode-690000" (driver="docker")
	I1122 21:12:50.994400    6606 client.go:168] LocalClient.Create starting
	I1122 21:12:50.994590    6606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca.pem
	I1122 21:12:50.994681    6606 main.go:141] libmachine: Decoding PEM data...
	I1122 21:12:50.994715    6606 main.go:141] libmachine: Parsing certificate...
	I1122 21:12:50.994829    6606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/cert.pem
	I1122 21:12:50.994895    6606 main.go:141] libmachine: Decoding PEM data...
	I1122 21:12:50.994922    6606 main.go:141] libmachine: Parsing certificate...
	I1122 21:12:51.015383    6606 cli_runner.go:164] Run: docker network inspect multinode-690000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 21:12:51.069762    6606 cli_runner.go:211] docker network inspect multinode-690000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 21:12:51.069857    6606 network_create.go:281] running [docker network inspect multinode-690000] to gather additional debugging logs...
	I1122 21:12:51.069877    6606 cli_runner.go:164] Run: docker network inspect multinode-690000
	W1122 21:12:51.121463    6606 cli_runner.go:211] docker network inspect multinode-690000 returned with exit code 1
	I1122 21:12:51.121495    6606 network_create.go:284] error running [docker network inspect multinode-690000]: docker network inspect multinode-690000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-690000 not found
	I1122 21:12:51.121507    6606 network_create.go:286] output of [docker network inspect multinode-690000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-690000 not found
	
	** /stderr **
	I1122 21:12:51.121610    6606 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 21:12:51.174281    6606 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 21:12:51.174677    6606 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00226c7a0}
	I1122 21:12:51.174698    6606 network_create.go:124] attempt to create docker network multinode-690000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1122 21:12:51.174760    6606 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-690000 multinode-690000
	I1122 21:12:51.261465    6606 network_create.go:108] docker network multinode-690000 192.168.58.0/24 created
	I1122 21:12:51.261505    6606 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-690000" container
	I1122 21:12:51.261614    6606 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 21:12:51.312554    6606 cli_runner.go:164] Run: docker volume create multinode-690000 --label name.minikube.sigs.k8s.io=multinode-690000 --label created_by.minikube.sigs.k8s.io=true
	I1122 21:12:51.363734    6606 oci.go:103] Successfully created a docker volume multinode-690000
	I1122 21:12:51.363847    6606 cli_runner.go:164] Run: docker run --rm --name multinode-690000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-690000 --entrypoint /usr/bin/test -v multinode-690000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1122 21:12:51.730586    6606 oci.go:107] Successfully prepared a docker volume multinode-690000
	I1122 21:12:51.730618    6606 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 21:12:51.730630    6606 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 21:12:51.730734    6606 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-690000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 21:18:50.994464    6606 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 21:18:50.994598    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:18:51.048266    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:18:51.048371    6606 retry.go:31] will retry after 347.84507ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:18:51.396475    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:18:51.449632    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:18:51.449742    6606 retry.go:31] will retry after 533.634889ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:18:51.985782    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:18:52.038872    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:18:52.038987    6606 retry.go:31] will retry after 825.642329ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:18:52.867045    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:18:52.921168    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	W1122 21:18:52.921289    6606 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	
	W1122 21:18:52.921308    6606 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:18:52.921365    6606 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 21:18:52.921429    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:18:52.973077    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:18:52.973167    6606 retry.go:31] will retry after 322.054656ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:18:53.297607    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:18:53.353649    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:18:53.353742    6606 retry.go:31] will retry after 249.153711ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:18:53.605240    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:18:53.657024    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:18:53.657114    6606 retry.go:31] will retry after 709.225374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:18:54.367459    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:18:54.420318    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	W1122 21:18:54.420417    6606 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	
	W1122 21:18:54.420432    6606 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:18:54.420450    6606 start.go:128] duration metric: createHost completed in 6m3.44864935s
	I1122 21:18:54.420456    6606 start.go:83] releasing machines lock for "multinode-690000", held for 6m3.448757229s
	W1122 21:18:54.420470    6606 start.go:691] error starting host: creating host: create host timed out in 360.000000 seconds
	I1122 21:18:54.420898    6606 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:18:54.472344    6606 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:18:54.472392    6606 delete.go:82] Unable to get host status for multinode-690000, assuming it has already been deleted: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	W1122 21:18:54.472472    6606 out.go:239] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1122 21:18:54.472483    6606 start.go:706] Will try again in 5 seconds ...
	I1122 21:18:59.473605    6606 start.go:365] acquiring machines lock for multinode-690000: {Name:mkea132b81449eba447ff3987e05bb42c1c9416e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 21:18:59.473787    6606 start.go:369] acquired machines lock for "multinode-690000" in 137.44µs
	I1122 21:18:59.473825    6606 start.go:96] Skipping create...Using existing machine configuration
	I1122 21:18:59.473842    6606 fix.go:54] fixHost starting: 
	I1122 21:18:59.474301    6606 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:18:59.526950    6606 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:18:59.527011    6606 fix.go:102] recreateIfNeeded on multinode-690000: state= err=unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:18:59.527031    6606 fix.go:107] machineExists: false. err=machine does not exist
	I1122 21:18:59.548551    6606 out.go:177] * docker "multinode-690000" container is missing, will recreate.
	I1122 21:18:59.570635    6606 delete.go:124] DEMOLISHING multinode-690000 ...
	I1122 21:18:59.570846    6606 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:18:59.622212    6606 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	W1122 21:18:59.622254    6606 stop.go:75] unable to get state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:18:59.622276    6606 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:18:59.622634    6606 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:18:59.671284    6606 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:18:59.671335    6606 delete.go:82] Unable to get host status for multinode-690000, assuming it has already been deleted: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:18:59.671426    6606 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-690000
	W1122 21:18:59.720366    6606 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-690000 returned with exit code 1
	I1122 21:18:59.720403    6606 kic.go:371] could not find the container multinode-690000 to remove it. will try anyways
	I1122 21:18:59.720480    6606 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:18:59.770414    6606 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	W1122 21:18:59.770456    6606 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:18:59.770535    6606 cli_runner.go:164] Run: docker exec --privileged -t multinode-690000 /bin/bash -c "sudo init 0"
	W1122 21:18:59.820423    6606 cli_runner.go:211] docker exec --privileged -t multinode-690000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1122 21:18:59.820454    6606 oci.go:650] error shutdown multinode-690000: docker exec --privileged -t multinode-690000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:19:00.822191    6606 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:19:00.873338    6606 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:19:00.873382    6606 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:19:00.873394    6606 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:19:00.873419    6606 retry.go:31] will retry after 696.771044ms: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:19:01.572600    6606 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:19:01.626381    6606 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:19:01.626425    6606 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:19:01.626437    6606 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:19:01.626460    6606 retry.go:31] will retry after 787.580198ms: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:19:02.414523    6606 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:19:02.469070    6606 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:19:02.469123    6606 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:19:02.469135    6606 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:19:02.469159    6606 retry.go:31] will retry after 1.489732157s: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:19:03.961260    6606 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:19:04.012027    6606 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:19:04.012069    6606 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:19:04.012084    6606 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:19:04.012109    6606 retry.go:31] will retry after 879.4512ms: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:19:04.892754    6606 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:19:04.947820    6606 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:19:04.947866    6606 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:19:04.947877    6606 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:19:04.947902    6606 retry.go:31] will retry after 3.739653278s: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:19:08.687855    6606 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:19:08.741693    6606 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:19:08.741739    6606 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:19:08.741754    6606 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:19:08.741776    6606 retry.go:31] will retry after 5.679652371s: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:19:14.422523    6606 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:19:14.477632    6606 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:19:14.477686    6606 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:19:14.477699    6606 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:19:14.477728    6606 oci.go:88] couldn't shut down multinode-690000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	 
	I1122 21:19:14.477811    6606 cli_runner.go:164] Run: docker rm -f -v multinode-690000
	I1122 21:19:14.528519    6606 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-690000
	W1122 21:19:14.578241    6606 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-690000 returned with exit code 1
	I1122 21:19:14.578356    6606 cli_runner.go:164] Run: docker network inspect multinode-690000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 21:19:14.627945    6606 cli_runner.go:164] Run: docker network rm multinode-690000
	I1122 21:19:14.731507    6606 fix.go:114] Sleeping 1 second for extra luck!
	I1122 21:19:15.733697    6606 start.go:125] createHost starting for "" (driver="docker")
	I1122 21:19:15.756636    6606 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1122 21:19:15.756739    6606 start.go:159] libmachine.API.Create for "multinode-690000" (driver="docker")
	I1122 21:19:15.756760    6606 client.go:168] LocalClient.Create starting
	I1122 21:19:15.756917    6606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca.pem
	I1122 21:19:15.756978    6606 main.go:141] libmachine: Decoding PEM data...
	I1122 21:19:15.756997    6606 main.go:141] libmachine: Parsing certificate...
	I1122 21:19:15.757044    6606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/cert.pem
	I1122 21:19:15.757079    6606 main.go:141] libmachine: Decoding PEM data...
	I1122 21:19:15.757093    6606 main.go:141] libmachine: Parsing certificate...
	I1122 21:19:15.777825    6606 cli_runner.go:164] Run: docker network inspect multinode-690000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 21:19:15.828935    6606 cli_runner.go:211] docker network inspect multinode-690000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 21:19:15.829023    6606 network_create.go:281] running [docker network inspect multinode-690000] to gather additional debugging logs...
	I1122 21:19:15.829041    6606 cli_runner.go:164] Run: docker network inspect multinode-690000
	W1122 21:19:15.879659    6606 cli_runner.go:211] docker network inspect multinode-690000 returned with exit code 1
	I1122 21:19:15.879692    6606 network_create.go:284] error running [docker network inspect multinode-690000]: docker network inspect multinode-690000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-690000 not found
	I1122 21:19:15.879703    6606 network_create.go:286] output of [docker network inspect multinode-690000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-690000 not found
	
	** /stderr **
	I1122 21:19:15.879849    6606 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 21:19:15.936988    6606 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 21:19:15.938353    6606 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 21:19:15.938837    6606 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00229e6d0}
	I1122 21:19:15.938871    6606 network_create.go:124] attempt to create docker network multinode-690000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1122 21:19:15.938995    6606 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-690000 multinode-690000
	W1122 21:19:15.994287    6606 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-690000 multinode-690000 returned with exit code 1
	W1122 21:19:15.994337    6606 network_create.go:149] failed to create docker network multinode-690000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-690000 multinode-690000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1122 21:19:15.994359    6606 network_create.go:116] failed to create docker network multinode-690000 192.168.67.0/24, will retry: subnet is taken
	I1122 21:19:15.995710    6606 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 21:19:15.996181    6606 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0022ff870}
	I1122 21:19:15.996200    6606 network_create.go:124] attempt to create docker network multinode-690000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1122 21:19:15.996269    6606 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-690000 multinode-690000
	I1122 21:19:16.082215    6606 network_create.go:108] docker network multinode-690000 192.168.76.0/24 created
	I1122 21:19:16.082254    6606 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-690000" container
	I1122 21:19:16.082368    6606 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 21:19:16.132577    6606 cli_runner.go:164] Run: docker volume create multinode-690000 --label name.minikube.sigs.k8s.io=multinode-690000 --label created_by.minikube.sigs.k8s.io=true
	I1122 21:19:16.181846    6606 oci.go:103] Successfully created a docker volume multinode-690000
	I1122 21:19:16.181978    6606 cli_runner.go:164] Run: docker run --rm --name multinode-690000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-690000 --entrypoint /usr/bin/test -v multinode-690000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1122 21:19:16.460170    6606 oci.go:107] Successfully prepared a docker volume multinode-690000
	I1122 21:19:16.460199    6606 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 21:19:16.460211    6606 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 21:19:16.460311    6606 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-690000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 21:25:15.757090    6606 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 21:25:15.757210    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:25:15.809721    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:25:15.809833    6606 retry.go:31] will retry after 364.391955ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:25:16.176744    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:25:16.231071    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:25:16.231184    6606 retry.go:31] will retry after 484.699803ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:25:16.716797    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:25:16.770534    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:25:16.770635    6606 retry.go:31] will retry after 440.962024ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:25:17.212489    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:25:17.266304    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:25:17.266402    6606 retry.go:31] will retry after 426.805691ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:25:17.695544    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:25:17.748670    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	W1122 21:25:17.748809    6606 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	
	W1122 21:25:17.748835    6606 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:25:17.748892    6606 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 21:25:17.748948    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:25:17.839510    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:25:17.839604    6606 retry.go:31] will retry after 241.084333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:25:18.081527    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:25:18.131733    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:25:18.131835    6606 retry.go:31] will retry after 459.899516ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:25:18.592584    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:25:18.645783    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:25:18.645881    6606 retry.go:31] will retry after 738.720213ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:25:19.387017    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:25:19.440602    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	W1122 21:25:19.440722    6606 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	
	W1122 21:25:19.440747    6606 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:25:19.440760    6606 start.go:128] duration metric: createHost completed in 6m3.707463713s
	I1122 21:25:19.440840    6606 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 21:25:19.440892    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:25:19.490891    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:25:19.490990    6606 retry.go:31] will retry after 188.114568ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:25:19.681512    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:25:19.735465    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:25:19.735633    6606 retry.go:31] will retry after 236.024322ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:25:19.974079    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:25:20.029163    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:25:20.029250    6606 retry.go:31] will retry after 363.394919ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:25:20.394207    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:25:20.448205    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	W1122 21:25:20.448306    6606 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	
	W1122 21:25:20.448321    6606 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:25:20.448380    6606 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 21:25:20.448433    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:25:20.498319    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:25:20.498408    6606 retry.go:31] will retry after 279.232431ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:25:20.780055    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:25:20.835008    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:25:20.835101    6606 retry.go:31] will retry after 373.647711ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:25:21.211194    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:25:21.265934    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:25:21.266036    6606 retry.go:31] will retry after 335.679264ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:25:21.603103    6606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:25:21.658402    6606 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	W1122 21:25:21.658508    6606 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	
	W1122 21:25:21.658527    6606 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:25:21.658543    6606 fix.go:56] fixHost completed within 6m22.185179292s
	I1122 21:25:21.658549    6606 start.go:83] releasing machines lock for "multinode-690000", held for 6m22.18522493s
	W1122 21:25:21.658630    6606 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-690000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-690000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1122 21:25:21.701010    6606 out.go:177] 
	W1122 21:25:21.722075    6606 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1122 21:25:21.722106    6606 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1122 21:25:21.722130    6606 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1122 21:25:21.743061    6606 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-690000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker " : exit status 52
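The start log above shows how a free subnet is chosen: 192.168.49.0/24 and 192.168.58.0/24 were already reserved, creating 192.168.67.0/24 failed with "Pool overlaps with other one on this address space", and 192.168.76.0/24 succeeded. A minimal Go sketch of that probe-and-retry, assuming only the docker CLI on PATH; the helper name and candidate list (192.168.85.0/24 included) are illustrative, not minikube's actual network_create.go:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// createFirstFreeNetwork tries each candidate /24 until `docker network create`
	// succeeds, skipping subnets whose pool overlaps an existing network.
	func createFirstFreeNetwork(name string, candidates []string) (string, error) {
		for _, subnet := range candidates {
			out, err := exec.Command("docker", "network", "create",
				"--driver=bridge", "--subnet="+subnet, name).CombinedOutput()
			if err == nil {
				return subnet, nil
			}
			if strings.Contains(string(out), "Pool overlaps with other one on this address space") {
				continue // subnet is taken, try the next candidate
			}
			return "", fmt.Errorf("docker network create failed: %v: %s", err, out)
		}
		return "", fmt.Errorf("no free subnet among %v", candidates)
	}

	func main() {
		subnet, err := createFirstFreeNetwork("multinode-690000",
			[]string{"192.168.67.0/24", "192.168.76.0/24", "192.168.85.0/24"})
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("created network on", subnet)
	}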
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-690000
helpers_test.go:235: (dbg) docker inspect multinode-690000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-690000",
	        "Id": "46526e5611db3f5fba8f5c1133ac3b9df5a3328f2905514c3c4c4ab6fa47ee94",
	        "Created": "2023-11-23T05:19:16.043493089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-690000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
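The inspect output above shows that only the bridge network survived the failed start: "Containers" is empty and the kic container itself was never created. A small Go sketch, assuming the docker CLI on PATH, that decodes just the fields shown here to confirm the leftover minikube-labelled network is unused before removing it (the log's own cleanup path runs docker network rm directly):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// network mirrors only the fields of `docker network inspect` used here.
	type network struct {
		Name       string            `json:"Name"`
		Containers map[string]any    `json:"Containers"`
		Labels     map[string]string `json:"Labels"`
	}

	func main() {
		out, err := exec.Command("docker", "network", "inspect", "multinode-690000").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		var nets []network // docker network inspect always returns a JSON array
		if err := json.Unmarshal(out, &nets); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		for _, n := range nets {
			if len(n.Containers) == 0 && n.Labels["created_by.minikube.sigs.k8s.io"] == "true" {
				// Nothing attached: safe to remove the leftover minikube network.
				fmt.Println("removing unused network", n.Name)
				_ = exec.Command("docker", "network", "rm", n.Name).Run()
			}
		}
	}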
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000: exit status 7 (106.193663ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 21:25:21.998632    6944 status.go:249] status error: host: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-690000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (751.88s)
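Most of the retries in the start log are a single lookup: the host port Docker published for the guest's 22/tcp, read with the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}; each attempt fails because the container never came into existence. A minimal Go sketch of that lookup, assuming the docker CLI on PATH and the profile's container name:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort returns the host port Docker mapped to the container's 22/tcp.
	func sshHostPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", fmt.Errorf("get port 22 for %q: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("multinode-690000")
		if err != nil {
			fmt.Println(err) // e.g. "No such container" while the host is still being created
			return
		}
		fmt.Println("ssh is published on host port", port)
	}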

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (84.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-690000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-690000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (91.230334ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-690000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-690000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-690000 -- rollout status deployment/busybox: exit status 1 (90.872894ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-690000"

                                                
                                                
** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (91.537865ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-690000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (93.465147ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-690000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (95.534173ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-690000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.092541ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-690000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (95.741167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-690000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (95.642579ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-690000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (96.911072ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-690000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (95.949782ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-690000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.852658ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-690000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (96.835502ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-690000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (89.945679ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-690000"

                                                
                                                
** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-690000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-690000 -- exec  -- nslookup kubernetes.io: exit status 1 (91.529083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-690000"

                                                
                                                
** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-690000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-690000 -- exec  -- nslookup kubernetes.default: exit status 1 (92.130659ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-690000"

                                                
                                                
** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-690000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-690000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (91.423027ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-690000"

                                                
                                                
** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
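Every kubectl step in this subtest fails the same way because no server for "multinode-690000" was ever written to the kubeconfig, so the apply, rollout, exec and jsonpath queries for pod IPs and names all exit with status 1. A hedged Go sketch that checks for the context before issuing the same jsonpath query the test retries; it assumes kubectl on PATH and is not what multinode_test.go itself does (the test goes through the minikube kubectl wrapper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const cluster = "multinode-690000"

		// List context names and make sure the cluster's context exists at all.
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil || !strings.Contains(string(out), cluster) {
			fmt.Printf("no kubeconfig context for %q; start the cluster first\n", cluster)
			return
		}

		// Same query the test retries: the IP of every pod in the default namespace.
		ips, err := exec.Command("kubectl", "--context", cluster,
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err != nil {
			fmt.Println("get pods failed:", err)
			return
		}
		fmt.Println("pod IPs:", string(ips))
	}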
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-690000
helpers_test.go:235: (dbg) docker inspect multinode-690000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-690000",
	        "Id": "46526e5611db3f5fba8f5c1133ac3b9df5a3328f2905514c3c4c4ab6fa47ee94",
	        "Created": "2023-11-23T05:19:16.043493089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-690000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000: exit status 7 (107.010188ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 21:26:46.596081    7016 status.go:249] status error: host: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-690000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (84.60s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-690000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (90.056717ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-690000"

                                                
                                                
** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-690000
helpers_test.go:235: (dbg) docker inspect multinode-690000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-690000",
	        "Id": "46526e5611db3f5fba8f5c1133ac3b9df5a3328f2905514c3c4c4ab6fa47ee94",
	        "Created": "2023-11-23T05:19:16.043493089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-690000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000: exit status 7 (106.823005ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 21:26:46.847369    7025 status.go:249] status error: host: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-690000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.25s)
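The post-mortems in this report all end the same way: minikube status --format={{.Host}} prints "Nonexistent" and exits with status 7, which the harness treats as "may be ok" and uses to skip log retrieval. A small Go sketch of that interpretation, using the same binary path as this run; the exit-code meaning is taken from the output above, not from minikube documentation:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "status",
			"--format={{.Host}}", "-p", "multinode-690000")
		out, err := cmd.Output()
		host := strings.TrimSpace(string(out))

		// A non-zero exit is not necessarily fatal here: in this run, exit status 7
		// with "Nonexistent" just means the host was never created (or was deleted).
		if err != nil {
			if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
				fmt.Printf("host is %q (exit 7, may be ok) - skipping log retrieval\n", host)
				return
			}
			fmt.Println("status failed:", err)
			return
		}
		fmt.Println("host state:", host)
	}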

                                                
                                    
TestMultiNode/serial/AddNode (0.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-690000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-690000 -v 3 --alsologtostderr: exit status 80 (197.544329ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 21:26:46.902245    7029 out.go:296] Setting OutFile to fd 1 ...
	I1122 21:26:46.902536    7029 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 21:26:46.902543    7029 out.go:309] Setting ErrFile to fd 2...
	I1122 21:26:46.902548    7029 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 21:26:46.902730    7029 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
	I1122 21:26:46.903070    7029 mustload.go:65] Loading cluster: multinode-690000
	I1122 21:26:46.903359    7029 config.go:182] Loaded profile config "multinode-690000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1122 21:26:46.903741    7029 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:26:46.953000    7029 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:26:46.976462    7029 out.go:177] 
	W1122 21:26:46.997486    7029 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	
	W1122 21:26:46.997518    7029 out.go:239] * 
	* 
	W1122 21:26:47.001440    7029 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1122 21:26:47.022503    7029 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-amd64 node add -p multinode-690000 -v 3 --alsologtostderr" : exit status 80
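node add exits with GUEST_STATUS because its mustload step inspects the control-plane container and Docker answers "No such container"; the same probe, docker container inspect --format {{.State.Status}}, is what the shutdown-verification retries at the top of this report were issuing. A minimal Go sketch of the probe, assuming the docker CLI on PATH (the helper name is illustrative, not minikube's oci package):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerStatus returns the container's state (e.g. "running", "exited"),
	// or "" with a nil error if the container does not exist at all.
	func containerStatus(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"--format", "{{.State.Status}}", name).CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "No such container") {
				return "", nil // absent, not an error for our purposes
			}
			return "", fmt.Errorf("inspect %s: %v: %s", name, err, out)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		status, err := containerStatus("multinode-690000")
		switch {
		case err != nil:
			fmt.Println(err)
		case status == "":
			fmt.Println("container does not exist")
		default:
			fmt.Println("container status:", status)
		}
	}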
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-690000
helpers_test.go:235: (dbg) docker inspect multinode-690000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-690000",
	        "Id": "46526e5611db3f5fba8f5c1133ac3b9df5a3328f2905514c3c4c4ab6fa47ee94",
	        "Created": "2023-11-23T05:19:16.043493089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-690000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000: exit status 7 (106.147286ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 21:26:47.205670    7035 status.go:249] status error: host: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-690000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/AddNode (0.36s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
multinode_test.go:155: expected profile "multinode-690000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[{\"Name\":\"mount-start-2-326000\",\"Status\":\"\",\"Config\":null,\"Active\":false}],\"valid\":[{\"Name\":\"multinode-690000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"multinode-690000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KV
MNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-690000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\
"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"
AutoPauseInterval\":60000000000,\"GPUs\":\"\"},\"Active\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
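The assertion above evidently counts nodes from the Config.Nodes list in the `profile list --output json` payload quoted in the error: it expects 3 entries for "multinode-690000" but finds only the original control-plane node. Below is a trimmed sketch of that decoding, using only field names visible in the log (valid, Name, Config, Nodes); it is an illustration of the check, not the test's actual helper.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList mirrors just the parts of the payload this check reads.
	type profileList struct {
		Valid []struct {
			Name   string
			Config struct {
				Nodes []struct {
					Name         string
					ControlPlane bool
					Worker       bool
				}
			}
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64", "profile", "list", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		// For a healthy three-node profile this prints 3; the log above shows
		// only the single control-plane entry, so the test sees 1.
		for _, p := range pl.Valid {
			fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
		}
	}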
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/ProfileList]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-690000
helpers_test.go:235: (dbg) docker inspect multinode-690000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-690000",
	        "Id": "46526e5611db3f5fba8f5c1133ac3b9df5a3328f2905514c3c4c4ab6fa47ee94",
	        "Created": "2023-11-23T05:19:16.043493089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-690000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000: exit status 7 (106.895195ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 21:26:47.546876    7047 status.go:249] status error: host: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-690000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/ProfileList (0.34s)

                                                
                                    
TestMultiNode/serial/CopyFile (0.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-690000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-690000 status --output json --alsologtostderr: exit status 7 (105.896074ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-690000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 21:26:47.601638    7051 out.go:296] Setting OutFile to fd 1 ...
	I1122 21:26:47.601937    7051 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 21:26:47.601944    7051 out.go:309] Setting ErrFile to fd 2...
	I1122 21:26:47.601948    7051 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 21:26:47.602137    7051 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
	I1122 21:26:47.602312    7051 out.go:303] Setting JSON to true
	I1122 21:26:47.602334    7051 mustload.go:65] Loading cluster: multinode-690000
	I1122 21:26:47.602363    7051 notify.go:220] Checking for updates...
	I1122 21:26:47.602624    7051 config.go:182] Loaded profile config "multinode-690000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1122 21:26:47.602637    7051 status.go:255] checking status of multinode-690000 ...
	I1122 21:26:47.603034    7051 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:26:47.652812    7051 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:26:47.652862    7051 status.go:330] multinode-690000 host status = "" (err=state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	)
	I1122 21:26:47.652880    7051 status.go:257] multinode-690000 status: &{Name:multinode-690000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1122 21:26:47.652899    7051 status.go:260] status error: host: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	E1122 21:26:47.652906    7051 status.go:263] The "multinode-690000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-amd64 -p multinode-690000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
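The decode error above is standard encoding/json behaviour: with only one node left in the profile, `status --output json` prints the single object shown in the stdout block, while the test decodes into a slice ([]cmd.Status), and a JSON object cannot be unmarshalled into a slice. A self-contained sketch of the mismatch, with a stand-in Status type (not the test's cmd.Status), follows.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status stands in for the test's cmd.Status; fields match the object in the log.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func main() {
		single := []byte(`{"Name":"multinode-690000","Host":"Nonexistent","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}`)

		var many []Status
		fmt.Println(json.Unmarshal(single, &many)) // json: cannot unmarshal object into Go value of type []main.Status

		var one Status
		fmt.Println(json.Unmarshal(single, &one), one.Host) // <nil> Nonexistent
	}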
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/CopyFile]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-690000
helpers_test.go:235: (dbg) docker inspect multinode-690000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-690000",
	        "Id": "46526e5611db3f5fba8f5c1133ac3b9df5a3328f2905514c3c4c4ab6fa47ee94",
	        "Created": "2023-11-23T05:19:16.043493089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-690000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000: exit status 7 (106.229773ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 21:26:47.812638    7057 status.go:249] status error: host: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-690000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/CopyFile (0.27s)

                                                
                                    
TestMultiNode/serial/StopNode (0.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-690000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-690000 node stop m03: exit status 85 (147.927456ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-690000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-690000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-690000 status: exit status 7 (107.146456ms)

                                                
                                                
-- stdout --
	multinode-690000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 21:26:48.068374    7063 status.go:260] status error: host: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	E1122 21:26:48.068387    7063 status.go:263] The "multinode-690000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-690000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-690000 status --alsologtostderr: exit status 7 (107.451428ms)

                                                
                                                
-- stdout --
	multinode-690000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 21:26:48.124306    7067 out.go:296] Setting OutFile to fd 1 ...
	I1122 21:26:48.124541    7067 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 21:26:48.124548    7067 out.go:309] Setting ErrFile to fd 2...
	I1122 21:26:48.124552    7067 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 21:26:48.124741    7067 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
	I1122 21:26:48.124918    7067 out.go:303] Setting JSON to false
	I1122 21:26:48.124940    7067 mustload.go:65] Loading cluster: multinode-690000
	I1122 21:26:48.124989    7067 notify.go:220] Checking for updates...
	I1122 21:26:48.125214    7067 config.go:182] Loaded profile config "multinode-690000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1122 21:26:48.125228    7067 status.go:255] checking status of multinode-690000 ...
	I1122 21:26:48.125717    7067 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:26:48.175868    7067 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:26:48.175930    7067 status.go:330] multinode-690000 host status = "" (err=state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	)
	I1122 21:26:48.175947    7067 status.go:257] multinode-690000 status: &{Name:multinode-690000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1122 21:26:48.175963    7067 status.go:260] status error: host: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	E1122 21:26:48.175970    7067 status.go:263] The "multinode-690000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-690000 status --alsologtostderr": multinode-690000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:233: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-690000 status --alsologtostderr": multinode-690000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:237: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-690000 status --alsologtostderr": multinode-690000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-690000
helpers_test.go:235: (dbg) docker inspect multinode-690000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-690000",
	        "Id": "46526e5611db3f5fba8f5c1133ac3b9df5a3328f2905514c3c4c4ab6fa47ee94",
	        "Created": "2023-11-23T05:19:16.043493089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-690000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000: exit status 7 (107.658157ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 21:26:48.338209    7073 status.go:249] status error: host: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-690000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopNode (0.53s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (0.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-690000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-690000 node start m03 --alsologtostderr: exit status 85 (146.728682ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 21:26:48.447825    7079 out.go:296] Setting OutFile to fd 1 ...
	I1122 21:26:48.448138    7079 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 21:26:48.448144    7079 out.go:309] Setting ErrFile to fd 2...
	I1122 21:26:48.448148    7079 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 21:26:48.448334    7079 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
	I1122 21:26:48.448672    7079 mustload.go:65] Loading cluster: multinode-690000
	I1122 21:26:48.448956    7079 config.go:182] Loaded profile config "multinode-690000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1122 21:26:48.471221    7079 out.go:177] 
	W1122 21:26:48.492187    7079 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1122 21:26:48.492217    7079 out.go:239] * 
	* 
	W1122 21:26:48.495974    7079 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1122 21:26:48.516969    7079 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:256: I1122 21:26:48.447825    7079 out.go:296] Setting OutFile to fd 1 ...
I1122 21:26:48.448138    7079 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1122 21:26:48.448144    7079 out.go:309] Setting ErrFile to fd 2...
I1122 21:26:48.448148    7079 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1122 21:26:48.448334    7079 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
I1122 21:26:48.448672    7079 mustload.go:65] Loading cluster: multinode-690000
I1122 21:26:48.448956    7079 config.go:182] Loaded profile config "multinode-690000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1122 21:26:48.471221    7079 out.go:177] 
W1122 21:26:48.492187    7079 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1122 21:26:48.492217    7079 out.go:239] * 
* 
W1122 21:26:48.495974    7079 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1122 21:26:48.516969    7079 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-amd64 -p multinode-690000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-690000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-690000 status: exit status 7 (107.545544ms)

                                                
                                                
-- stdout --
	multinode-690000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 21:26:48.647580    7081 status.go:260] status error: host: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	E1122 21:26:48.647595    7081 status.go:263] The "multinode-690000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-690000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-690000
helpers_test.go:235: (dbg) docker inspect multinode-690000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-690000",
	        "Id": "46526e5611db3f5fba8f5c1133ac3b9df5a3328f2905514c3c4c4ab6fa47ee94",
	        "Created": "2023-11-23T05:19:16.043493089Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-690000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000: exit status 7 (106.495767ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 21:26:48.808395    7087 status.go:249] status error: host: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-690000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.47s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (789.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-690000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-690000
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p multinode-690000: exit status 82 (15.052364424s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-690000"  ...
	* Stopping node "multinode-690000"  ...
	* Stopping node "multinode-690000"  ...
	* Stopping node "multinode-690000"  ...
	* Stopping node "multinode-690000"  ...
	* Stopping node "multinode-690000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-690000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:292: failed to run minikube stop. args "out/minikube-darwin-amd64 node list -p multinode-690000" : exit status 82
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-690000 --wait=true -v=8 --alsologtostderr
E1122 21:27:31.070897    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 21:27:58.041430    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 21:32:14.201240    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 21:32:31.152002    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 21:32:58.122145    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 21:37:31.151265    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 21:37:41.175405    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 21:37:58.121996    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-690000 --wait=true -v=8 --alsologtostderr: exit status 52 (12m54.065042731s)

                                                
                                                
-- stdout --
	* [multinode-690000] minikube v1.32.0 on Darwin 14.1.1
	  - MINIKUBE_LOCATION=17659
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-690000 in cluster multinode-690000
	* Pulling base image ...
	* docker "multinode-690000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "multinode-690000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 21:27:03.973827    7114 out.go:296] Setting OutFile to fd 1 ...
	I1122 21:27:03.974030    7114 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 21:27:03.974036    7114 out.go:309] Setting ErrFile to fd 2...
	I1122 21:27:03.974040    7114 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 21:27:03.974216    7114 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
	I1122 21:27:03.975589    7114 out.go:303] Setting JSON to false
	I1122 21:27:03.997667    7114 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":3397,"bootTime":1700713826,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1122 21:27:03.997793    7114 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1122 21:27:04.020186    7114 out.go:177] * [multinode-690000] minikube v1.32.0 on Darwin 14.1.1
	I1122 21:27:04.041847    7114 out.go:177]   - MINIKUBE_LOCATION=17659
	I1122 21:27:04.041951    7114 notify.go:220] Checking for updates...
	I1122 21:27:04.062990    7114 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	I1122 21:27:04.084801    7114 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1122 21:27:04.105855    7114 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 21:27:04.126894    7114 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	I1122 21:27:04.148738    7114 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 21:27:04.170143    7114 config.go:182] Loaded profile config "multinode-690000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1122 21:27:04.170266    7114 driver.go:378] Setting default libvirt URI to qemu:///system
	I1122 21:27:04.225394    7114 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.2 (129061)
	I1122 21:27:04.225519    7114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 21:27:04.324180    7114 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:88 SystemTime:2023-11-23 05:27:04.31344612 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218719232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-160d99154625 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=un
confined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:
Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Doc
ker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1122 21:27:04.366682    7114 out.go:177] * Using the docker driver based on existing profile
	I1122 21:27:04.387809    7114 start.go:298] selected driver: docker
	I1122 21:27:04.387834    7114 start.go:902] validating driver "docker" against &{Name:multinode-690000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-690000 Namespace:default APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1122 21:27:04.387938    7114 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 21:27:04.388129    7114 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 21:27:04.488080    7114 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:88 SystemTime:2023-11-23 05:27:04.478473172 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218719232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-160d99154625 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=u
nconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription
:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Do
cker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1122 21:27:04.491175    7114 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 21:27:04.491237    7114 cni.go:84] Creating CNI manager for ""
	I1122 21:27:04.491247    7114 cni.go:136] 1 nodes found, recommending kindnet
	I1122 21:27:04.491256    7114 start_flags.go:323] config:
	{Name:multinode-690000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-690000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: S
taticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1122 21:27:04.534660    7114 out.go:177] * Starting control plane node multinode-690000 in cluster multinode-690000
	I1122 21:27:04.555776    7114 cache.go:121] Beginning downloading kic base image for docker with docker
	I1122 21:27:04.577918    7114 out.go:177] * Pulling base image ...
	I1122 21:27:04.619881    7114 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 21:27:04.619956    7114 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1122 21:27:04.619972    7114 cache.go:56] Caching tarball of preloaded images
	I1122 21:27:04.619980    7114 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1122 21:27:04.620186    7114 preload.go:174] Found /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1122 21:27:04.620212    7114 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1122 21:27:04.620371    7114 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/multinode-690000/config.json ...
	I1122 21:27:04.671717    7114 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1122 21:27:04.671743    7114 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1122 21:27:04.671769    7114 cache.go:194] Successfully downloaded all kic artifacts
	I1122 21:27:04.671818    7114 start.go:365] acquiring machines lock for multinode-690000: {Name:mkea132b81449eba447ff3987e05bb42c1c9416e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 21:27:04.671907    7114 start.go:369] acquired machines lock for "multinode-690000" in 66.145µs
	I1122 21:27:04.671926    7114 start.go:96] Skipping create...Using existing machine configuration
	I1122 21:27:04.671937    7114 fix.go:54] fixHost starting: 
	I1122 21:27:04.672148    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:27:04.721419    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:27:04.721462    7114 fix.go:102] recreateIfNeeded on multinode-690000: state= err=unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:27:04.721487    7114 fix.go:107] machineExists: false. err=machine does not exist
	I1122 21:27:04.764815    7114 out.go:177] * docker "multinode-690000" container is missing, will recreate.
	I1122 21:27:04.786143    7114 delete.go:124] DEMOLISHING multinode-690000 ...
	I1122 21:27:04.786306    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:27:04.837697    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	W1122 21:27:04.837746    7114 stop.go:75] unable to get state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:27:04.837771    7114 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:27:04.838123    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:27:04.887761    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:27:04.887818    7114 delete.go:82] Unable to get host status for multinode-690000, assuming it has already been deleted: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:27:04.887897    7114 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-690000
	W1122 21:27:04.937708    7114 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-690000 returned with exit code 1
	I1122 21:27:04.937742    7114 kic.go:371] could not find the container multinode-690000 to remove it. will try anyways
	I1122 21:27:04.937813    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:27:04.987398    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	W1122 21:27:04.987461    7114 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:27:04.987542    7114 cli_runner.go:164] Run: docker exec --privileged -t multinode-690000 /bin/bash -c "sudo init 0"
	W1122 21:27:05.037295    7114 cli_runner.go:211] docker exec --privileged -t multinode-690000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1122 21:27:05.037325    7114 oci.go:650] error shutdown multinode-690000: docker exec --privileged -t multinode-690000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:27:06.039270    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:27:06.091979    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:27:06.092033    7114 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:27:06.092044    7114 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:27:06.092080    7114 retry.go:31] will retry after 397.348999ms: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:27:06.491801    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:27:06.545726    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:27:06.545772    7114 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:27:06.545782    7114 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:27:06.545806    7114 retry.go:31] will retry after 735.343346ms: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:27:07.282155    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:27:07.335207    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:27:07.335250    7114 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:27:07.335258    7114 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:27:07.335283    7114 retry.go:31] will retry after 1.540001033s: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:27:08.875769    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:27:08.933027    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:27:08.933070    7114 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:27:08.933084    7114 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:27:08.933109    7114 retry.go:31] will retry after 918.963536ms: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:27:09.852387    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:27:09.904171    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:27:09.904219    7114 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:27:09.904229    7114 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:27:09.904262    7114 retry.go:31] will retry after 3.759635766s: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:27:13.664417    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:27:13.717596    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:27:13.717657    7114 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:27:13.717670    7114 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:27:13.717696    7114 retry.go:31] will retry after 2.904856721s: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:27:16.623836    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:27:16.676572    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:27:16.676614    7114 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:27:16.676629    7114 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:27:16.676652    7114 retry.go:31] will retry after 5.475899145s: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:27:22.153537    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:27:22.206445    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:27:22.206493    7114 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:27:22.206506    7114 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:27:22.206531    7114 oci.go:88] couldn't shut down multinode-690000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	 
	I1122 21:27:22.206599    7114 cli_runner.go:164] Run: docker rm -f -v multinode-690000
	I1122 21:27:22.256933    7114 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-690000
	W1122 21:27:22.306407    7114 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-690000 returned with exit code 1
	I1122 21:27:22.306508    7114 cli_runner.go:164] Run: docker network inspect multinode-690000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 21:27:22.356400    7114 cli_runner.go:164] Run: docker network rm multinode-690000
	I1122 21:27:22.451135    7114 fix.go:114] Sleeping 1 second for extra luck!
	I1122 21:27:23.452177    7114 start.go:125] createHost starting for "" (driver="docker")
	I1122 21:27:23.474284    7114 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1122 21:27:23.474457    7114 start.go:159] libmachine.API.Create for "multinode-690000" (driver="docker")
	I1122 21:27:23.474512    7114 client.go:168] LocalClient.Create starting
	I1122 21:27:23.474703    7114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca.pem
	I1122 21:27:23.474796    7114 main.go:141] libmachine: Decoding PEM data...
	I1122 21:27:23.474842    7114 main.go:141] libmachine: Parsing certificate...
	I1122 21:27:23.474948    7114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/cert.pem
	I1122 21:27:23.475032    7114 main.go:141] libmachine: Decoding PEM data...
	I1122 21:27:23.475049    7114 main.go:141] libmachine: Parsing certificate...
	I1122 21:27:23.496445    7114 cli_runner.go:164] Run: docker network inspect multinode-690000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 21:27:23.547456    7114 cli_runner.go:211] docker network inspect multinode-690000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 21:27:23.547550    7114 network_create.go:281] running [docker network inspect multinode-690000] to gather additional debugging logs...
	I1122 21:27:23.547574    7114 cli_runner.go:164] Run: docker network inspect multinode-690000
	W1122 21:27:23.597202    7114 cli_runner.go:211] docker network inspect multinode-690000 returned with exit code 1
	I1122 21:27:23.597230    7114 network_create.go:284] error running [docker network inspect multinode-690000]: docker network inspect multinode-690000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-690000 not found
	I1122 21:27:23.597244    7114 network_create.go:286] output of [docker network inspect multinode-690000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-690000 not found
	
	** /stderr **
	I1122 21:27:23.597396    7114 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 21:27:23.649018    7114 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 21:27:23.649397    7114 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00244c9e0}
	I1122 21:27:23.649411    7114 network_create.go:124] attempt to create docker network multinode-690000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1122 21:27:23.649478    7114 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-690000 multinode-690000
	I1122 21:27:23.734388    7114 network_create.go:108] docker network multinode-690000 192.168.58.0/24 created
	I1122 21:27:23.734429    7114 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-690000" container
	I1122 21:27:23.734519    7114 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 21:27:23.784636    7114 cli_runner.go:164] Run: docker volume create multinode-690000 --label name.minikube.sigs.k8s.io=multinode-690000 --label created_by.minikube.sigs.k8s.io=true
	I1122 21:27:23.834559    7114 oci.go:103] Successfully created a docker volume multinode-690000
	I1122 21:27:23.834677    7114 cli_runner.go:164] Run: docker run --rm --name multinode-690000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-690000 --entrypoint /usr/bin/test -v multinode-690000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1122 21:27:24.128121    7114 oci.go:107] Successfully prepared a docker volume multinode-690000
	I1122 21:27:24.128171    7114 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 21:27:24.128187    7114 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 21:27:24.128279    7114 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-690000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 21:33:23.558523    7114 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 21:33:23.559670    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:33:23.615548    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:33:23.615670    7114 retry.go:31] will retry after 327.131393ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:23.945231    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:33:23.997195    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:33:23.997327    7114 retry.go:31] will retry after 430.865652ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:24.430492    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:33:24.483728    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:33:24.483827    7114 retry.go:31] will retry after 541.368344ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:25.026178    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:33:25.078784    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	W1122 21:33:25.078901    7114 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	
	W1122 21:33:25.078920    7114 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:25.078981    7114 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 21:33:25.079041    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:33:25.128482    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:33:25.128578    7114 retry.go:31] will retry after 258.950323ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:25.389472    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:33:25.443373    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:33:25.443479    7114 retry.go:31] will retry after 532.936413ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:25.978848    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:33:26.032781    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:33:26.032880    7114 retry.go:31] will retry after 581.48157ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:26.614782    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:33:26.668827    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	W1122 21:33:26.668942    7114 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	
	W1122 21:33:26.668958    7114 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:26.668969    7114 start.go:128] duration metric: createHost completed in 6m3.136065129s
	I1122 21:33:26.669036    7114 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 21:33:26.669093    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:33:26.718941    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:33:26.719033    7114 retry.go:31] will retry after 347.403996ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:27.068890    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:33:27.123759    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:33:27.123861    7114 retry.go:31] will retry after 531.70316ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:27.657183    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:33:27.708839    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:33:27.708930    7114 retry.go:31] will retry after 703.113192ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:28.413556    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:33:28.466414    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	W1122 21:33:28.466511    7114 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	
	W1122 21:33:28.466536    7114 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:28.466600    7114 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 21:33:28.466665    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:33:28.518226    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:33:28.518318    7114 retry.go:31] will retry after 220.42333ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:28.741115    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:33:28.792616    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:33:28.792707    7114 retry.go:31] will retry after 217.040179ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:29.012053    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:33:29.085013    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:33:29.085101    7114 retry.go:31] will retry after 336.605022ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:29.424118    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:33:29.477539    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:33:29.477624    7114 retry.go:31] will retry after 674.977339ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:30.154979    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:33:30.207785    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	W1122 21:33:30.207890    7114 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	
	W1122 21:33:30.207904    7114 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:30.207916    7114 fix.go:56] fixHost completed within 6m25.455333015s
	I1122 21:33:30.207923    7114 start.go:83] releasing machines lock for "multinode-690000", held for 6m25.455359434s
	W1122 21:33:30.207938    7114 start.go:691] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W1122 21:33:30.208006    7114 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I1122 21:33:30.208013    7114 start.go:706] Will try again in 5 seconds ...
	I1122 21:33:35.208232    7114 start.go:365] acquiring machines lock for multinode-690000: {Name:mkea132b81449eba447ff3987e05bb42c1c9416e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 21:33:35.208436    7114 start.go:369] acquired machines lock for "multinode-690000" in 138.419µs
	I1122 21:33:35.208469    7114 start.go:96] Skipping create...Using existing machine configuration
	I1122 21:33:35.208476    7114 fix.go:54] fixHost starting: 
	I1122 21:33:35.208915    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:33:35.264249    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:33:35.264292    7114 fix.go:102] recreateIfNeeded on multinode-690000: state= err=unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:35.264316    7114 fix.go:107] machineExists: false. err=machine does not exist
	I1122 21:33:35.286230    7114 out.go:177] * docker "multinode-690000" container is missing, will recreate.
	I1122 21:33:35.329762    7114 delete.go:124] DEMOLISHING multinode-690000 ...
	I1122 21:33:35.329976    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:33:35.381368    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	W1122 21:33:35.381411    7114 stop.go:75] unable to get state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:35.381429    7114 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:35.381814    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:33:35.430681    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:33:35.430728    7114 delete.go:82] Unable to get host status for multinode-690000, assuming it has already been deleted: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:35.430815    7114 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-690000
	W1122 21:33:35.479917    7114 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-690000 returned with exit code 1
	I1122 21:33:35.479950    7114 kic.go:371] could not find the container multinode-690000 to remove it. will try anyways
	I1122 21:33:35.480040    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:33:35.528693    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	W1122 21:33:35.528747    7114 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:35.528840    7114 cli_runner.go:164] Run: docker exec --privileged -t multinode-690000 /bin/bash -c "sudo init 0"
	W1122 21:33:35.578533    7114 cli_runner.go:211] docker exec --privileged -t multinode-690000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1122 21:33:35.578563    7114 oci.go:650] error shutdown multinode-690000: docker exec --privileged -t multinode-690000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:36.580520    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:33:36.635237    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:33:36.635281    7114 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:36.635296    7114 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:33:36.635320    7114 retry.go:31] will retry after 525.396909ms: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:37.163118    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:33:37.216140    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:33:37.216184    7114 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:37.216194    7114 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:33:37.216220    7114 retry.go:31] will retry after 1.077438678s: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:38.296071    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:33:38.350864    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:33:38.350907    7114 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:38.350922    7114 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:33:38.350947    7114 retry.go:31] will retry after 1.264883369s: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:39.617290    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:33:39.669254    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:33:39.669298    7114 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:39.669306    7114 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:33:39.669330    7114 retry.go:31] will retry after 2.056360568s: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:41.726135    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:33:41.780469    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:33:41.780518    7114 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:41.780526    7114 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:33:41.780548    7114 retry.go:31] will retry after 1.355746599s: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:43.138634    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:33:43.191784    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:33:43.191842    7114 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:43.191852    7114 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:33:43.191878    7114 retry.go:31] will retry after 4.233815313s: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:47.426096    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:33:47.478113    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:33:47.478158    7114 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:47.478166    7114 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:33:47.478191    7114 retry.go:31] will retry after 3.612375341s: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:51.092903    7114 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:33:51.145727    7114 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:33:51.145786    7114 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:33:51.145799    7114 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:33:51.145830    7114 oci.go:88] couldn't shut down multinode-690000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	 
	I1122 21:33:51.145902    7114 cli_runner.go:164] Run: docker rm -f -v multinode-690000
	I1122 21:33:51.196549    7114 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-690000
	W1122 21:33:51.246080    7114 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-690000 returned with exit code 1
	I1122 21:33:51.246190    7114 cli_runner.go:164] Run: docker network inspect multinode-690000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 21:33:51.296338    7114 cli_runner.go:164] Run: docker network rm multinode-690000
	I1122 21:33:51.398316    7114 fix.go:114] Sleeping 1 second for extra luck!
	I1122 21:33:52.398691    7114 start.go:125] createHost starting for "" (driver="docker")
	I1122 21:33:52.422120    7114 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1122 21:33:52.422285    7114 start.go:159] libmachine.API.Create for "multinode-690000" (driver="docker")
	I1122 21:33:52.422320    7114 client.go:168] LocalClient.Create starting
	I1122 21:33:52.422501    7114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca.pem
	I1122 21:33:52.422572    7114 main.go:141] libmachine: Decoding PEM data...
	I1122 21:33:52.422594    7114 main.go:141] libmachine: Parsing certificate...
	I1122 21:33:52.422654    7114 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/cert.pem
	I1122 21:33:52.422704    7114 main.go:141] libmachine: Decoding PEM data...
	I1122 21:33:52.422738    7114 main.go:141] libmachine: Parsing certificate...
	I1122 21:33:52.423261    7114 cli_runner.go:164] Run: docker network inspect multinode-690000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 21:33:52.477618    7114 cli_runner.go:211] docker network inspect multinode-690000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 21:33:52.477705    7114 network_create.go:281] running [docker network inspect multinode-690000] to gather additional debugging logs...
	I1122 21:33:52.477722    7114 cli_runner.go:164] Run: docker network inspect multinode-690000
	W1122 21:33:52.526873    7114 cli_runner.go:211] docker network inspect multinode-690000 returned with exit code 1
	I1122 21:33:52.526904    7114 network_create.go:284] error running [docker network inspect multinode-690000]: docker network inspect multinode-690000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-690000 not found
	I1122 21:33:52.526915    7114 network_create.go:286] output of [docker network inspect multinode-690000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-690000 not found
	
	** /stderr **
	I1122 21:33:52.527046    7114 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 21:33:52.578534    7114 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 21:33:52.580195    7114 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 21:33:52.580560    7114 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002213d40}
	I1122 21:33:52.580573    7114 network_create.go:124] attempt to create docker network multinode-690000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I1122 21:33:52.580636    7114 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-690000 multinode-690000
	W1122 21:33:52.630572    7114 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-690000 multinode-690000 returned with exit code 1
	W1122 21:33:52.630614    7114 network_create.go:149] failed to create docker network multinode-690000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-690000 multinode-690000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W1122 21:33:52.630631    7114 network_create.go:116] failed to create docker network multinode-690000 192.168.67.0/24, will retry: subnet is taken
	I1122 21:33:52.631999    7114 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 21:33:52.632386    7114 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013c10}
	I1122 21:33:52.632398    7114 network_create.go:124] attempt to create docker network multinode-690000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I1122 21:33:52.632472    7114 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-690000 multinode-690000
	I1122 21:33:52.718390    7114 network_create.go:108] docker network multinode-690000 192.168.76.0/24 created
	I1122 21:33:52.718424    7114 kic.go:121] calculated static IP "192.168.76.2" for the "multinode-690000" container
	I1122 21:33:52.718566    7114 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 21:33:52.769704    7114 cli_runner.go:164] Run: docker volume create multinode-690000 --label name.minikube.sigs.k8s.io=multinode-690000 --label created_by.minikube.sigs.k8s.io=true
	I1122 21:33:52.819375    7114 oci.go:103] Successfully created a docker volume multinode-690000
	I1122 21:33:52.819499    7114 cli_runner.go:164] Run: docker run --rm --name multinode-690000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-690000 --entrypoint /usr/bin/test -v multinode-690000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1122 21:33:53.161943    7114 oci.go:107] Successfully prepared a docker volume multinode-690000
	I1122 21:33:53.161975    7114 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 21:33:53.161987    7114 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 21:33:53.162107    7114 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-690000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 21:39:52.424656    7114 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 21:39:52.424798    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:39:52.476834    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:39:52.476961    7114 retry.go:31] will retry after 315.708185ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:39:52.793321    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:39:52.847709    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:39:52.847820    7114 retry.go:31] will retry after 278.520414ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:39:53.127713    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:39:53.180045    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:39:53.180150    7114 retry.go:31] will retry after 453.138444ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:39:53.633671    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:39:53.687239    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	W1122 21:39:53.687345    7114 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	
	W1122 21:39:53.687367    7114 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:39:53.687432    7114 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 21:39:53.687483    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:39:53.736519    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:39:53.736618    7114 retry.go:31] will retry after 186.760023ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:39:53.925788    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:39:53.978306    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:39:53.978426    7114 retry.go:31] will retry after 335.317673ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:39:54.316086    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:39:54.368487    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:39:54.368604    7114 retry.go:31] will retry after 551.373821ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:39:54.920290    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:39:54.971200    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	W1122 21:39:54.971318    7114 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	
	W1122 21:39:54.971336    7114 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:39:54.971345    7114 start.go:128] duration metric: createHost completed in 6m2.572732065s
	I1122 21:39:54.971423    7114 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 21:39:54.971484    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:39:55.021114    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:39:55.021214    7114 retry.go:31] will retry after 158.076061ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:39:55.181665    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:39:55.234536    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:39:55.234634    7114 retry.go:31] will retry after 465.822927ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:39:55.702848    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:39:55.756772    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:39:55.756863    7114 retry.go:31] will retry after 655.76559ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:39:56.413266    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:39:56.466087    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	W1122 21:39:56.466189    7114 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	
	W1122 21:39:56.466205    7114 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:39:56.466255    7114 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 21:39:56.466314    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:39:56.515339    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:39:56.515428    7114 retry.go:31] will retry after 205.904387ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:39:56.723726    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:39:56.776461    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:39:56.776561    7114 retry.go:31] will retry after 449.743185ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:39:57.228621    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:39:57.280034    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	I1122 21:39:57.280134    7114 retry.go:31] will retry after 576.27937ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:39:57.858755    7114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000
	W1122 21:39:57.913042    7114 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000 returned with exit code 1
	W1122 21:39:57.913138    7114 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	
	W1122 21:39:57.913153    7114 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "multinode-690000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-690000: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:39:57.913161    7114 fix.go:56] fixHost completed within 6m22.704831535s
	I1122 21:39:57.913167    7114 start.go:83] releasing machines lock for "multinode-690000", held for 6m22.704865064s
	W1122 21:39:57.913247    7114 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-690000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	* Failed to start docker container. Running "minikube delete -p multinode-690000" may fix it: recreate: creating host: create host timed out in 360.000000 seconds
	I1122 21:39:57.956363    7114 out.go:177] 
	W1122 21:39:57.978183    7114 out.go:239] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: recreate: creating host: create host timed out in 360.000000 seconds
	W1122 21:39:57.978257    7114 out.go:239] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1122 21:39:57.978360    7114 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1122 21:39:58.000193    7114 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-690000" : exit status 52
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-690000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-690000
helpers_test.go:235: (dbg) docker inspect multinode-690000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-690000",
	        "Id": "8de918f3d314a94d8fa7d015c0710426a2ff50e05ef0e32a21076c52955a180f",
	        "Created": "2023-11-23T05:33:52.679212899Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-690000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
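
The docker inspect output above describes a network object, not a container: its "Containers" map is empty, which confirms that the multinode-690000 node container was never created even though its network survived the failed start. For readers scripting their own post-mortem, the following is a small hypothetical helper (not part of helpers_test.go; the struct only covers the fields shown above) that decodes the same information via "docker network inspect":

package main

// Hypothetical post-mortem helper: decodes the fields shown in the inspect output
// above and reports whether any containers are attached to the minikube network.

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type ipamConfig struct {
	Subnet  string
	Gateway string
}

type network struct {
	Name       string
	Driver     string
	IPAM       struct{ Config []ipamConfig }
	Containers map[string]any
	Labels     map[string]string
}

func main() {
	out, err := exec.Command("docker", "network", "inspect", "multinode-690000").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	var nets []network
	if err := json.Unmarshal(out, &nets); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, n := range nets {
		// An empty Containers map means no node container was ever attached (or created).
		fmt.Printf("%s (%s): subnets=%v attached-containers=%d\n",
			n.Name, n.Driver, n.IPAM.Config, len(n.Containers))
	}
}
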
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000: exit status 7 (108.056879ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 21:39:58.302438    7601 status.go:249] status error: host: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-690000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (789.41s)
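
The start log for this test shows "docker network create" first failing on 192.168.67.0/24 with "Pool overlaps with other one on this address space" before succeeding on 192.168.76.0/24, and the container itself then never appearing within the 360-second createHost timeout. A quick way to see which /24 subnets are already claimed by minikube-created networks is sketched below; this is a standalone diagnostic in Go that only assumes standard docker CLI behavior and is not minikube's network_create.go (the label value is taken from the log above):

package main

// Diagnostic sketch: list Docker networks carrying minikube's creation label and the
// subnets they occupy, to explain "Pool overlaps with other one on this address space".

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Names of networks labelled by minikube at creation time.
	out, err := exec.Command("docker", "network", "ls",
		"--filter", "label=created_by.minikube.sigs.k8s.io=true",
		"--format", "{{.Name}}").Output()
	if err != nil {
		fmt.Println("docker network ls failed:", err)
		return
	}
	for _, name := range strings.Fields(string(out)) {
		// Print the subnet(s) each network reserves in the bridge address space.
		subnets, err := exec.Command("docker", "network", "inspect",
			"-f", "{{range .IPAM.Config}}{{.Subnet}} {{end}}", name).Output()
		if err != nil {
			fmt.Printf("%s: inspect failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %s\n", name, strings.TrimSpace(string(subnets)))
	}
}
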

                                                
                                    
TestMultiNode/serial/DeleteNode (0.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-690000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-690000 node delete m03: exit status 80 (202.892986ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-690000 node delete m03": exit status 80
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-690000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-690000 status --alsologtostderr: exit status 7 (107.018142ms)

                                                
                                                
-- stdout --
	multinode-690000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 21:39:58.560759    7609 out.go:296] Setting OutFile to fd 1 ...
	I1122 21:39:58.561643    7609 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 21:39:58.561651    7609 out.go:309] Setting ErrFile to fd 2...
	I1122 21:39:58.561656    7609 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 21:39:58.561844    7609 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
	I1122 21:39:58.562040    7609 out.go:303] Setting JSON to false
	I1122 21:39:58.562064    7609 mustload.go:65] Loading cluster: multinode-690000
	I1122 21:39:58.562093    7609 notify.go:220] Checking for updates...
	I1122 21:39:58.562389    7609 config.go:182] Loaded profile config "multinode-690000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1122 21:39:58.562403    7609 status.go:255] checking status of multinode-690000 ...
	I1122 21:39:58.562819    7609 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:39:58.612596    7609 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:39:58.612657    7609 status.go:330] multinode-690000 host status = "" (err=state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	)
	I1122 21:39:58.612676    7609 status.go:257] multinode-690000 status: &{Name:multinode-690000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1122 21:39:58.612694    7609 status.go:260] status error: host: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	E1122 21:39:58.612701    7609 status.go:263] The "multinode-690000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-690000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-690000
helpers_test.go:235: (dbg) docker inspect multinode-690000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-690000",
	        "Id": "8de918f3d314a94d8fa7d015c0710426a2ff50e05ef0e32a21076c52955a180f",
	        "Created": "2023-11-23T05:33:52.679212899Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-690000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000: exit status 7 (107.246665ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 21:39:58.773957    7616 status.go:249] status error: host: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-690000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/DeleteNode (0.47s)
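
Every failure in this test reduces to the same probe: "docker container inspect multinode-690000 --format {{.State.Status}}" exits 1 with "No such container", which minikube surfaces as host state "Nonexistent". Below is a minimal sketch of that mapping; the function name hostState and the fallback behavior are assumptions inferred from the log output above, not minikube's actual status.go:

package main

// Minimal sketch of the container-state probe driving these failures.

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState returns the container's state string, or "Nonexistent" when the Docker
// daemon answers "No such container", which is exactly what the post-mortem shows.
func hostState(name string) string {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "No such container") {
			return "Nonexistent"
		}
		return "Error"
	}
	return strings.TrimSpace(string(out))
}

func main() {
	fmt.Println(hostState("multinode-690000"))
}
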

                                                
                                    
TestMultiNode/serial/StopMultiNode (14.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-690000 stop
multinode_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-690000 stop: exit status 82 (14.429585646s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-690000"  ...
	* Stopping node "multinode-690000"  ...
	* Stopping node "multinode-690000"  ...
	* Stopping node "multinode-690000"  ...
	* Stopping node "multinode-690000"  ...
	* Stopping node "multinode-690000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: docker container inspect multinode-690000 --format=<no value>: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:316: node stop returned an error. args "out/minikube-darwin-amd64 -p multinode-690000 stop": exit status 82
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-690000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-690000 status: exit status 7 (107.71672ms)

                                                
                                                
-- stdout --
	multinode-690000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 21:40:13.311537    7638 status.go:260] status error: host: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	E1122 21:40:13.311550    7638 status.go:263] The "multinode-690000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-690000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-690000 status --alsologtostderr: exit status 7 (105.624167ms)

                                                
                                                
-- stdout --
	multinode-690000
	type: Control Plane
	host: Nonexistent
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 21:40:13.365984    7642 out.go:296] Setting OutFile to fd 1 ...
	I1122 21:40:13.366222    7642 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 21:40:13.366227    7642 out.go:309] Setting ErrFile to fd 2...
	I1122 21:40:13.366232    7642 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 21:40:13.366418    7642 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
	I1122 21:40:13.366603    7642 out.go:303] Setting JSON to false
	I1122 21:40:13.366626    7642 mustload.go:65] Loading cluster: multinode-690000
	I1122 21:40:13.366659    7642 notify.go:220] Checking for updates...
	I1122 21:40:13.366920    7642 config.go:182] Loaded profile config "multinode-690000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1122 21:40:13.366933    7642 status.go:255] checking status of multinode-690000 ...
	I1122 21:40:13.367329    7642 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:40:13.417250    7642 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:40:13.417301    7642 status.go:330] multinode-690000 host status = "" (err=state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	)
	I1122 21:40:13.417320    7642 status.go:257] multinode-690000 status: &{Name:multinode-690000 Host:Nonexistent Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E1122 21:40:13.417335    7642 status.go:260] status error: host: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	E1122 21:40:13.417341    7642 status.go:263] The "multinode-690000" host does not exist!

                                                
                                                
** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-amd64 -p multinode-690000 status --alsologtostderr": multinode-690000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-690000 status --alsologtostderr": multinode-690000
type: Control Plane
host: Nonexistent
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-690000
helpers_test.go:235: (dbg) docker inspect multinode-690000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-690000",
	        "Id": "8de918f3d314a94d8fa7d015c0710426a2ff50e05ef0e32a21076c52955a180f",
	        "Created": "2023-11-23T05:33:52.679212899Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.76.0/24",
	                    "Gateway": "192.168.76.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-690000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000: exit status 7 (105.713875ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 21:40:13.576748    7648 status.go:249] status error: host: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-690000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/StopMultiNode (14.80s)
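
The stop attempt above loops through "Stopping node" six times before exiting with GUEST_STOP_TIMEOUT, and the earlier start log shows the same retry.go pattern ("will retry after ...") with waits roughly between 150ms and 660ms. The following is an illustrative, hypothetical sketch of a bounded retry-with-backoff loop in Go; the names retryAfter and the backoff formula are assumptions for illustration, not minikube's retry.go:

package main

// Illustrative sketch of a bounded retry-with-backoff loop.

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfter runs fn up to attempts times, sleeping a small randomized interval between
// failures, and returns the last error once the budget is exhausted.
func retryAfter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Randomized wait between base and 2*base before the next attempt.
		wait := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	// Simulate the probe that keeps failing because the container does not exist.
	err := retryAfter(4, 200*time.Millisecond, func() error {
		return errors.New("No such container: multinode-690000")
	})
	fmt.Println("giving up:", err)
}
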

                                                
                                    
TestMultiNode/serial/RestartMultiNode (156.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-690000 --wait=true -v=8 --alsologtostderr --driver=docker 
E1122 21:42:31.149998    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-690000 --wait=true -v=8 --alsologtostderr --driver=docker : signal: killed (2m36.568907755s)

                                                
                                                
-- stdout --
	* [multinode-690000] minikube v1.32.0 on Darwin 14.1.1
	  - MINIKUBE_LOCATION=17659
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-690000 in cluster multinode-690000
	* Pulling base image ...
	* docker "multinode-690000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 21:40:13.687435    7654 out.go:296] Setting OutFile to fd 1 ...
	I1122 21:40:13.687731    7654 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 21:40:13.687738    7654 out.go:309] Setting ErrFile to fd 2...
	I1122 21:40:13.687743    7654 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 21:40:13.687923    7654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
	I1122 21:40:13.689280    7654 out.go:303] Setting JSON to false
	I1122 21:40:13.711962    7654 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4187,"bootTime":1700713826,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1122 21:40:13.712079    7654 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1122 21:40:13.734236    7654 out.go:177] * [multinode-690000] minikube v1.32.0 on Darwin 14.1.1
	I1122 21:40:13.756134    7654 out.go:177]   - MINIKUBE_LOCATION=17659
	I1122 21:40:13.756206    7654 notify.go:220] Checking for updates...
	I1122 21:40:13.799878    7654 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	I1122 21:40:13.841971    7654 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1122 21:40:13.863061    7654 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 21:40:13.884009    7654 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	I1122 21:40:13.905048    7654 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 21:40:13.926739    7654 config.go:182] Loaded profile config "multinode-690000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1122 21:40:13.927493    7654 driver.go:378] Setting default libvirt URI to qemu:///system
	I1122 21:40:13.982790    7654 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.2 (129061)
	I1122 21:40:13.982922    7654 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 21:40:14.085143    7654 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:108 SystemTime:2023-11-23 05:40:14.074870472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218719232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-160d99154625 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1122 21:40:14.127651    7654 out.go:177] * Using the docker driver based on existing profile
	I1122 21:40:14.148462    7654 start.go:298] selected driver: docker
	I1122 21:40:14.148487    7654 start.go:902] validating driver "docker" against &{Name:multinode-690000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-690000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1122 21:40:14.148616    7654 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 21:40:14.148829    7654 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 21:40:14.250183    7654 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:108 SystemTime:2023-11-23 05:40:14.239778907 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218719232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-160d99154625 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1122 21:40:14.253486    7654 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 21:40:14.253560    7654 cni.go:84] Creating CNI manager for ""
	I1122 21:40:14.253574    7654 cni.go:136] 1 nodes found, recommending kindnet
	I1122 21:40:14.253590    7654 start_flags.go:323] config:
	{Name:multinode-690000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-690000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1122 21:40:14.296555    7654 out.go:177] * Starting control plane node multinode-690000 in cluster multinode-690000
	I1122 21:40:14.318430    7654 cache.go:121] Beginning downloading kic base image for docker with docker
	I1122 21:40:14.339536    7654 out.go:177] * Pulling base image ...
	I1122 21:40:14.381349    7654 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 21:40:14.381408    7654 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1122 21:40:14.381420    7654 cache.go:56] Caching tarball of preloaded images
	I1122 21:40:14.381426    7654 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1122 21:40:14.381587    7654 preload.go:174] Found /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1122 21:40:14.381601    7654 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1122 21:40:14.381720    7654 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/multinode-690000/config.json ...
	I1122 21:40:14.431677    7654 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1122 21:40:14.431701    7654 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1122 21:40:14.431720    7654 cache.go:194] Successfully downloaded all kic artifacts
	I1122 21:40:14.431760    7654 start.go:365] acquiring machines lock for multinode-690000: {Name:mkea132b81449eba447ff3987e05bb42c1c9416e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 21:40:14.431848    7654 start.go:369] acquired machines lock for "multinode-690000" in 66.472µs
	I1122 21:40:14.431869    7654 start.go:96] Skipping create...Using existing machine configuration
	I1122 21:40:14.431880    7654 fix.go:54] fixHost starting: 
	I1122 21:40:14.432101    7654 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:40:14.481658    7654 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:40:14.481714    7654 fix.go:102] recreateIfNeeded on multinode-690000: state= err=unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:40:14.481734    7654 fix.go:107] machineExists: false. err=machine does not exist
	I1122 21:40:14.525344    7654 out.go:177] * docker "multinode-690000" container is missing, will recreate.
	I1122 21:40:14.547175    7654 delete.go:124] DEMOLISHING multinode-690000 ...
	I1122 21:40:14.547405    7654 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:40:14.598617    7654 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	W1122 21:40:14.598661    7654 stop.go:75] unable to get state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:40:14.598684    7654 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:40:14.599032    7654 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:40:14.649079    7654 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:40:14.649128    7654 delete.go:82] Unable to get host status for multinode-690000, assuming it has already been deleted: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:40:14.649206    7654 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-690000
	W1122 21:40:14.698733    7654 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-690000 returned with exit code 1
	I1122 21:40:14.698767    7654 kic.go:371] could not find the container multinode-690000 to remove it. will try anyways
	I1122 21:40:14.698837    7654 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:40:14.748367    7654 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	W1122 21:40:14.748433    7654 oci.go:84] error getting container status, will try to delete anyways: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:40:14.748510    7654 cli_runner.go:164] Run: docker exec --privileged -t multinode-690000 /bin/bash -c "sudo init 0"
	W1122 21:40:14.797747    7654 cli_runner.go:211] docker exec --privileged -t multinode-690000 /bin/bash -c "sudo init 0" returned with exit code 1
	I1122 21:40:14.797779    7654 oci.go:650] error shutdown multinode-690000: docker exec --privileged -t multinode-690000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:40:15.800102    7654 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:40:15.853936    7654 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:40:15.853983    7654 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:40:15.853994    7654 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:40:15.854030    7654 retry.go:31] will retry after 582.652992ms: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:40:16.437632    7654 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:40:16.489137    7654 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:40:16.489180    7654 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:40:16.489193    7654 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:40:16.489216    7654 retry.go:31] will retry after 529.969131ms: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:40:17.021485    7654 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:40:17.074809    7654 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:40:17.074859    7654 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:40:17.074870    7654 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:40:17.074897    7654 retry.go:31] will retry after 916.668847ms: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:40:17.992533    7654 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:40:18.047935    7654 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:40:18.047978    7654 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:40:18.047987    7654 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:40:18.048012    7654 retry.go:31] will retry after 2.085796338s: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:40:20.136159    7654 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:40:20.187540    7654 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:40:20.187588    7654 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:40:20.187599    7654 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:40:20.187633    7654 retry.go:31] will retry after 2.496690607s: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:40:22.685241    7654 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:40:22.737211    7654 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:40:22.737255    7654 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:40:22.737264    7654 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:40:22.737288    7654 retry.go:31] will retry after 2.801077307s: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:40:25.539032    7654 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:40:25.591673    7654 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:40:25.591732    7654 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:40:25.591743    7654 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:40:25.591771    7654 retry.go:31] will retry after 5.179720839s: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:40:30.773846    7654 cli_runner.go:164] Run: docker container inspect multinode-690000 --format={{.State.Status}}
	W1122 21:40:30.830287    7654 cli_runner.go:211] docker container inspect multinode-690000 --format={{.State.Status}} returned with exit code 1
	I1122 21:40:30.830337    7654 oci.go:662] temporary error verifying shutdown: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	I1122 21:40:30.830347    7654 oci.go:664] temporary error: container multinode-690000 status is  but expect it to be exited
	I1122 21:40:30.830373    7654 oci.go:88] couldn't shut down multinode-690000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000
	 
	I1122 21:40:30.830453    7654 cli_runner.go:164] Run: docker rm -f -v multinode-690000
	I1122 21:40:30.881657    7654 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-690000
	W1122 21:40:30.932654    7654 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-690000 returned with exit code 1
	I1122 21:40:30.932770    7654 cli_runner.go:164] Run: docker network inspect multinode-690000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 21:40:30.982874    7654 cli_runner.go:164] Run: docker network rm multinode-690000
	I1122 21:40:31.090361    7654 fix.go:114] Sleeping 1 second for extra luck!
	I1122 21:40:32.090487    7654 start.go:125] createHost starting for "" (driver="docker")
	I1122 21:40:32.112455    7654 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1122 21:40:32.112620    7654 start.go:159] libmachine.API.Create for "multinode-690000" (driver="docker")
	I1122 21:40:32.112696    7654 client.go:168] LocalClient.Create starting
	I1122 21:40:32.112875    7654 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/ca.pem
	I1122 21:40:32.112960    7654 main.go:141] libmachine: Decoding PEM data...
	I1122 21:40:32.112995    7654 main.go:141] libmachine: Parsing certificate...
	I1122 21:40:32.113119    7654 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17659-904/.minikube/certs/cert.pem
	I1122 21:40:32.113188    7654 main.go:141] libmachine: Decoding PEM data...
	I1122 21:40:32.113204    7654 main.go:141] libmachine: Parsing certificate...
	I1122 21:40:32.113983    7654 cli_runner.go:164] Run: docker network inspect multinode-690000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 21:40:32.168978    7654 cli_runner.go:211] docker network inspect multinode-690000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 21:40:32.169057    7654 network_create.go:281] running [docker network inspect multinode-690000] to gather additional debugging logs...
	I1122 21:40:32.169071    7654 cli_runner.go:164] Run: docker network inspect multinode-690000
	W1122 21:40:32.218753    7654 cli_runner.go:211] docker network inspect multinode-690000 returned with exit code 1
	I1122 21:40:32.218779    7654 network_create.go:284] error running [docker network inspect multinode-690000]: docker network inspect multinode-690000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-690000 not found
	I1122 21:40:32.218791    7654 network_create.go:286] output of [docker network inspect multinode-690000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-690000 not found
	
	** /stderr **
	I1122 21:40:32.218919    7654 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 21:40:32.270293    7654 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1122 21:40:32.270669    7654 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0023959b0}
	I1122 21:40:32.270685    7654 network_create.go:124] attempt to create docker network multinode-690000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I1122 21:40:32.270756    7654 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-690000 multinode-690000
	I1122 21:40:32.356050    7654 network_create.go:108] docker network multinode-690000 192.168.58.0/24 created
	I1122 21:40:32.356094    7654 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-690000" container
	I1122 21:40:32.356203    7654 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 21:40:32.407006    7654 cli_runner.go:164] Run: docker volume create multinode-690000 --label name.minikube.sigs.k8s.io=multinode-690000 --label created_by.minikube.sigs.k8s.io=true
	I1122 21:40:32.456726    7654 oci.go:103] Successfully created a docker volume multinode-690000
	I1122 21:40:32.456839    7654 cli_runner.go:164] Run: docker run --rm --name multinode-690000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-690000 --entrypoint /usr/bin/test -v multinode-690000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1122 21:40:32.756702    7654 oci.go:107] Successfully prepared a docker volume multinode-690000
	I1122 21:40:32.756741    7654 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 21:40:32.756754    7654 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 21:40:32.756854    7654 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-690000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir

                                                
                                                
** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-690000 --wait=true -v=8 --alsologtostderr --driver=docker " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-690000
helpers_test.go:235: (dbg) docker inspect multinode-690000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "multinode-690000",
	        "Id": "2df5d04cafa357932aac0bd545e5639b7eac9ffa315f422ef421513f4a9fb7bf",
	        "Created": "2023-11-23T05:40:32.3175579Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "multinode-690000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-690000 -n multinode-690000: exit status 7 (106.77643ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 21:42:50.365305    7777 status.go:249] status error: host: state: unknown state "multinode-690000": docker container inspect multinode-690000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: multinode-690000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-690000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestMultiNode/serial/RestartMultiNode (156.79s)

                                                
                                    
TestScheduledStopUnix (300.89s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-493000 --memory=2048 --driver=docker 
E1122 21:47:31.230188    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 21:47:58.201505    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 21:48:54.283202    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-493000 --memory=2048 --driver=docker : signal: killed (5m0.004480084s)

                                                
                                                
-- stdout --
	* [scheduled-stop-493000] minikube v1.32.0 on Darwin 14.1.1
	  - MINIKUBE_LOCATION=17659
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node scheduled-stop-493000 in cluster scheduled-stop-493000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
scheduled_stop_test.go:130: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [scheduled-stop-493000] minikube v1.32.0 on Darwin 14.1.1
	  - MINIKUBE_LOCATION=17659
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node scheduled-stop-493000 in cluster scheduled-stop-493000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

                                                
                                                
-- /stdout --
panic.go:523: *** TestScheduledStopUnix FAILED at 2023-11-22 21:50:10.005564 -0800 PST m=+4520.086612647
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-493000
helpers_test.go:235: (dbg) docker inspect scheduled-stop-493000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "scheduled-stop-493000",
	        "Id": "47c592d077763cb4210fcb956e764e5108c203a6f8d16e1380f9e66822c6bdf5",
	        "Created": "2023-11-23T05:45:10.996093132Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "scheduled-stop-493000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-493000 -n scheduled-stop-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-493000 -n scheduled-stop-493000: exit status 7 (107.254016ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 21:50:10.166099    8298 status.go:249] status error: host: state: unknown state "scheduled-stop-493000": docker container inspect scheduled-stop-493000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: scheduled-stop-493000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-493000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "scheduled-stop-493000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-493000
--- FAIL: TestScheduledStopUnix (300.89s)

                                                
                                    
TestSkaffold (300.89s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe1829499586 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-912000 --memory=2600 --driver=docker 
E1122 21:52:31.230998    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 21:52:58.201201    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
E1122 21:54:21.258726    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p skaffold-912000 --memory=2600 --driver=docker : signal: killed (4m58.403153797s)

                                                
                                                
-- stdout --
	* [skaffold-912000] minikube v1.32.0 on Darwin 14.1.1
	  - MINIKUBE_LOCATION=17659
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node skaffold-912000 in cluster skaffold-912000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
skaffold_test.go:68: starting minikube: signal: killed

                                                
                                                
-- stdout --
	* [skaffold-912000] minikube v1.32.0 on Darwin 14.1.1
	  - MINIKUBE_LOCATION=17659
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node skaffold-912000 in cluster skaffold-912000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2600MB) ...

                                                
                                                
-- /stdout --
panic.go:523: *** TestSkaffold FAILED at 2023-11-22 21:55:10.904628 -0800 PST m=+4820.983935133
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-912000
helpers_test.go:235: (dbg) docker inspect skaffold-912000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "skaffold-912000",
	        "Id": "e321c4cef3463671e3c24804326134bc3d747856b1516ca1b7cf42a446568dc0",
	        "Created": "2023-11-23T05:50:13.626148576Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.58.0/24",
	                    "Gateway": "192.168.58.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "65535"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "skaffold-912000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-912000 -n skaffold-912000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-912000 -n skaffold-912000: exit status 7 (107.412812ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 21:55:11.066043    8441 status.go:249] status error: host: state: unknown state "skaffold-912000": docker container inspect skaffold-912000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: skaffold-912000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-912000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "skaffold-912000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-912000
--- FAIL: TestSkaffold (300.89s)

                                                
                                    
TestInsufficientStorage (300.74s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-333000 --memory=2048 --output=json --wait=true --driver=docker 
E1122 21:57:31.232901    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 21:57:58.205011    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-333000 --memory=2048 --output=json --wait=true --driver=docker : signal: killed (5m0.004445598s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e6be15a9-0330-4241-a4d7-3e45347f36f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-333000] minikube v1.32.0 on Darwin 14.1.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1a5de261-ffe0-49fe-8bda-af891ac09eaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17659"}}
	{"specversion":"1.0","id":"28865ec2-ee48-4285-abc7-1d14951d5e24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig"}}
	{"specversion":"1.0","id":"cec3e686-2b6f-42ec-82a5-9df3491bb61b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"76449066-4f55-48c3-90cb-828a5b4b12ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2a0816d8-21f2-4712-b9ae-8392629e42ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube"}}
	{"specversion":"1.0","id":"031b356f-01f8-4495-951f-4df96b76c263","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"166b85a4-22df-4d07-87f2-bcd7020263b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5caaa835-2d2e-4684-9389-fcc1d3ce035b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"5d6b90aa-bf26-4972-bc6f-9880d2b4c962","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"96e470ad-7943-45f0-a747-a9c27e2ed441","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"481427dc-7074-4de8-84c0-9da0be05b378","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-333000 in cluster insufficient-storage-333000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5444c1bb-16a5-491d-90ff-872739a1904a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"de33ba6d-0223-46e9-93c4-e17474367057","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-333000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-333000 --output=json --layout=cluster: context deadline exceeded (591ns)
status_test.go:87: unmarshalling: unexpected end of JSON input
helpers_test.go:175: Cleaning up "insufficient-storage-333000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-333000
--- FAIL: TestInsufficientStorage (300.74s)

                                                
                                    

Test pass (147/189)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 7.96
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.29
10 TestDownloadOnly/v1.28.4/json-events 6.7
11 TestDownloadOnly/v1.28.4/preload-exists 0
14 TestDownloadOnly/v1.28.4/kubectl 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.3
16 TestDownloadOnly/DeleteAll 0.63
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.37
18 TestDownloadOnlyKic 1.99
19 TestBinaryMirror 1.6
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
25 TestAddons/Setup 140.83
29 TestAddons/parallel/InspektorGadget 11.03
30 TestAddons/parallel/MetricsServer 5.94
31 TestAddons/parallel/HelmTiller 10.07
33 TestAddons/parallel/CSI 72.06
34 TestAddons/parallel/Headlamp 14.47
35 TestAddons/parallel/CloudSpanner 5.67
36 TestAddons/parallel/LocalPath 52.37
37 TestAddons/parallel/NvidiaDevicePlugin 5.77
40 TestAddons/serial/GCPAuth/Namespaces 0.1
41 TestAddons/StoppedEnableDisable 11.61
49 TestHyperKitDriverInstallOrUpdate 5.83
52 TestErrorSpam/setup 22.57
53 TestErrorSpam/start 2.24
54 TestErrorSpam/status 1.2
55 TestErrorSpam/pause 1.67
56 TestErrorSpam/unpause 1.8
57 TestErrorSpam/stop 11.43
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 37.37
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 39.43
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.07
68 TestFunctional/serial/CacheCmd/cache/add_remote 3.43
69 TestFunctional/serial/CacheCmd/cache/add_local 1.68
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
71 TestFunctional/serial/CacheCmd/cache/list 0.08
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.42
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.99
74 TestFunctional/serial/CacheCmd/cache/delete 0.16
75 TestFunctional/serial/MinikubeKubectlCmd 0.55
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.78
77 TestFunctional/serial/ExtraConfig 40.17
78 TestFunctional/serial/ComponentHealth 0.06
79 TestFunctional/serial/LogsCmd 3.13
80 TestFunctional/serial/LogsFileCmd 3.33
81 TestFunctional/serial/InvalidService 5.74
83 TestFunctional/parallel/ConfigCmd 0.53
84 TestFunctional/parallel/DashboardCmd 13.17
85 TestFunctional/parallel/DryRun 1.97
86 TestFunctional/parallel/InternationalLanguage 0.84
87 TestFunctional/parallel/StatusCmd 1.19
92 TestFunctional/parallel/AddonsCmd 0.27
93 TestFunctional/parallel/PersistentVolumeClaim 26.64
95 TestFunctional/parallel/SSHCmd 0.78
96 TestFunctional/parallel/CpCmd 1.86
97 TestFunctional/parallel/MySQL 35.43
98 TestFunctional/parallel/FileSync 0.42
99 TestFunctional/parallel/CertSync 2.69
103 TestFunctional/parallel/NodeLabels 0.05
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
107 TestFunctional/parallel/License 0.43
108 TestFunctional/parallel/Version/short 0.11
109 TestFunctional/parallel/Version/components 0.87
110 TestFunctional/parallel/ImageCommands/ImageListShort 0.35
111 TestFunctional/parallel/ImageCommands/ImageListTable 0.45
112 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
113 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
114 TestFunctional/parallel/ImageCommands/ImageBuild 2.84
115 TestFunctional/parallel/ImageCommands/Setup 2.39
116 TestFunctional/parallel/DockerEnv/bash 2.19
117 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.44
118 TestFunctional/parallel/UpdateContextCmd/no_changes 0.3
119 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.35
120 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.33
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.76
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.29
123 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.98
124 TestFunctional/parallel/ImageCommands/ImageRemove 0.82
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.66
126 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.76
127 TestFunctional/parallel/ServiceCmd/DeployApp 15.16
128 TestFunctional/parallel/ServiceCmd/List 0.43
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.43
130 TestFunctional/parallel/ServiceCmd/HTTPS 15
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.55
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.2
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
142 TestFunctional/parallel/ServiceCmd/Format 15
143 TestFunctional/parallel/ServiceCmd/URL 15
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
145 TestFunctional/parallel/ProfileCmd/profile_list 0.48
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
147 TestFunctional/parallel/MountCmd/any-port 8.95
148 TestFunctional/parallel/MountCmd/specific-port 2.42
149 TestFunctional/parallel/MountCmd/VerifyCleanup 3.72
150 TestFunctional/delete_addon-resizer_images 0.14
151 TestFunctional/delete_my-image_image 0.05
152 TestFunctional/delete_minikube_cached_images 0.05
156 TestImageBuild/serial/Setup 21.66
157 TestImageBuild/serial/NormalBuild 1.66
158 TestImageBuild/serial/BuildWithBuildArg 0.94
159 TestImageBuild/serial/BuildWithDockerIgnore 0.79
160 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.73
170 TestJSONOutput/start/Command 35.75
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.59
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.62
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 10.82
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.76
195 TestKicCustomNetwork/create_custom_network 24.08
196 TestKicCustomNetwork/use_default_bridge_network 23.8
197 TestKicExistingNetwork 24.56
198 TestKicCustomSubnet 23.3
199 TestKicStaticIP 24.12
200 TestMainNoArgs 0.08
201 TestMinikubeProfile 50.33
204 TestMountStart/serial/StartWithMountFirst 7.29
205 TestMountStart/serial/VerifyMountFirst 0.39
206 TestMountStart/serial/StartWithMountSecond 7.5
207 TestMountStart/serial/VerifyMountSecond 0.38
208 TestMountStart/serial/DeleteFirst 2.11
209 TestMountStart/serial/VerifyMountPostDelete 0.38
210 TestMountStart/serial/Stop 1.57
211 TestMountStart/serial/RestartStopped 8.4
230 TestPreload 138.67
251 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 6.28
252 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 8.33
TestDownloadOnly/v1.16.0/json-events (7.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-001000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-001000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (7.964616961s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.96s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-001000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-001000: exit status 85 (290.091767ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-001000 | jenkins | v1.32.0 | 22 Nov 23 20:34 PST |          |
	|         | -p download-only-001000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/22 20:34:49
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.4 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 20:34:49.768512    1487 out.go:296] Setting OutFile to fd 1 ...
	I1122 20:34:49.768799    1487 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 20:34:49.768804    1487 out.go:309] Setting ErrFile to fd 2...
	I1122 20:34:49.768809    1487 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 20:34:49.768979    1487 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
	W1122 20:34:49.769077    1487 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17659-904/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17659-904/.minikube/config/config.json: no such file or directory
	I1122 20:34:49.770783    1487 out.go:303] Setting JSON to true
	I1122 20:34:49.794584    1487 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":263,"bootTime":1700713826,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1122 20:34:49.794702    1487 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1122 20:34:49.819966    1487 out.go:97] [download-only-001000] minikube v1.32.0 on Darwin 14.1.1
	I1122 20:34:49.841893    1487 out.go:169] MINIKUBE_LOCATION=17659
	I1122 20:34:49.820209    1487 notify.go:220] Checking for updates...
	W1122 20:34:49.820235    1487 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball: no such file or directory
	I1122 20:34:49.886718    1487 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	I1122 20:34:49.907897    1487 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1122 20:34:49.950959    1487 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 20:34:49.994003    1487 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	W1122 20:34:50.036936    1487 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1122 20:34:50.037292    1487 driver.go:378] Setting default libvirt URI to qemu:///system
	I1122 20:34:50.096951    1487 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.2 (129061)
	I1122 20:34:50.097077    1487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 20:34:50.204803    1487 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:false NGoroutines:51 SystemTime:2023-11-23 04:34:50.192350544 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:6 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218719232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-160d99154625 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=u
nconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription
:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Do
cker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1122 20:34:50.225873    1487 out.go:97] Using the docker driver based on user configuration
	I1122 20:34:50.225911    1487 start.go:298] selected driver: docker
	I1122 20:34:50.225921    1487 start.go:902] validating driver "docker" against <nil>
	I1122 20:34:50.226088    1487 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 20:34:50.330960    1487 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:false NGoroutines:51 SystemTime:2023-11-23 04:34:50.320019869 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:6 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218719232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-160d99154625 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=u
nconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription
:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Do
cker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1122 20:34:50.331126    1487 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1122 20:34:50.336080    1487 start_flags.go:394] Using suggested 5882MB memory alloc based on sys=32768MB, container=5930MB
	I1122 20:34:50.336243    1487 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1122 20:34:50.357620    1487 out.go:169] Using Docker Desktop driver with root privileges
	I1122 20:34:50.378773    1487 cni.go:84] Creating CNI manager for ""
	I1122 20:34:50.378815    1487 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1122 20:34:50.378833    1487 start_flags.go:323] config:
	{Name:download-only-001000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:5882 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-001000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1122 20:34:50.400993    1487 out.go:97] Starting control plane node download-only-001000 in cluster download-only-001000
	I1122 20:34:50.401060    1487 cache.go:121] Beginning downloading kic base image for docker with docker
	I1122 20:34:50.422703    1487 out.go:97] Pulling base image ...
	I1122 20:34:50.422811    1487 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1122 20:34:50.422903    1487 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1122 20:34:50.474007    1487 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1122 20:34:50.474231    1487 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory
	I1122 20:34:50.474373    1487 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1122 20:34:50.477255    1487 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1122 20:34:50.477285    1487 cache.go:56] Caching tarball of preloaded images
	I1122 20:34:50.477435    1487 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1122 20:34:50.499886    1487 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1122 20:34:50.499916    1487 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1122 20:34:50.590748    1487 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1122 20:34:53.118179    1487 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1122 20:34:53.118347    1487 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1122 20:34:53.670079    1487 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1122 20:34:53.670308    1487 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/download-only-001000/config.json ...
	I1122 20:34:53.670331    1487 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/download-only-001000/config.json: {Name:mkf15630244e4f1df5d501b6be3b0612d847c4d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 20:34:53.670622    1487 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1122 20:34:53.670906    1487 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17659-904/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-001000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.29s)
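The start logs above repeatedly shell out to `docker system info --format "{{json .}}"` (cli_runner.go:164) and read fields such as NCPU and MemTotal out of the JSON dump to size the cluster (start_flags.go:394 suggests 5882MB from the container limit). The Go sketch below reproduces that probe with only the standard library; the struct, names, and error handling are illustrative assumptions, not minikube's actual cli_runner/start_flags code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo picks out the two fields referenced above; NCPU and MemTotal appear
// verbatim in the `docker info` JSON dump shown in the log.
type dockerInfo struct {
	NCPU     int   `json:"NCPU"`
	MemTotal int64 `json:"MemTotal"`
}

func main() {
	// Same probe as the cli_runner.go:164 line in the log above.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	// The run above reported NCPU:12 MemTotal:6218719232 (~5930MB available to containers).
	fmt.Printf("docker reports CPUs=%d memory=%dMB\n", info.NCPU, info.MemTotal/1024/1024)
}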

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (6.7s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-001000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-001000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker : (6.701279742s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (6.70s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-001000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-001000: exit status 85 (298.524637ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-001000 | jenkins | v1.32.0 | 22 Nov 23 20:34 PST |          |
	|         | -p download-only-001000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-001000 | jenkins | v1.32.0 | 22 Nov 23 20:34 PST |          |
	|         | -p download-only-001000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/22 20:34:58
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.4 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 20:34:58.027090    1523 out.go:296] Setting OutFile to fd 1 ...
	I1122 20:34:58.027385    1523 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 20:34:58.027391    1523 out.go:309] Setting ErrFile to fd 2...
	I1122 20:34:58.027396    1523 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 20:34:58.027571    1523 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
	W1122 20:34:58.027665    1523 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17659-904/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17659-904/.minikube/config/config.json: no such file or directory
	I1122 20:34:58.028904    1523 out.go:303] Setting JSON to true
	I1122 20:34:58.051265    1523 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":272,"bootTime":1700713826,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1122 20:34:58.051382    1523 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1122 20:34:58.073209    1523 out.go:97] [download-only-001000] minikube v1.32.0 on Darwin 14.1.1
	I1122 20:34:58.095221    1523 out.go:169] MINIKUBE_LOCATION=17659
	I1122 20:34:58.073435    1523 notify.go:220] Checking for updates...
	I1122 20:34:58.138913    1523 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	I1122 20:34:58.160107    1523 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1122 20:34:58.181327    1523 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 20:34:58.202798    1523 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	W1122 20:34:58.245988    1523 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1122 20:34:58.246763    1523 config.go:182] Loaded profile config "download-only-001000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1122 20:34:58.246841    1523 start.go:810] api.Load failed for download-only-001000: filestore "download-only-001000": Docker machine "download-only-001000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1122 20:34:58.246999    1523 driver.go:378] Setting default libvirt URI to qemu:///system
	W1122 20:34:58.247042    1523 start.go:810] api.Load failed for download-only-001000: filestore "download-only-001000": Docker machine "download-only-001000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1122 20:34:58.303253    1523 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.2 (129061)
	I1122 20:34:58.303359    1523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 20:34:58.407634    1523 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:false NGoroutines:51 SystemTime:2023-11-23 04:34:58.398108432 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:6 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218719232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-160d99154625 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=u
nconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription
:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Do
cker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1122 20:34:58.428849    1523 out.go:97] Using the docker driver based on existing profile
	I1122 20:34:58.428868    1523 start.go:298] selected driver: docker
	I1122 20:34:58.428874    1523 start.go:902] validating driver "docker" against &{Name:download-only-001000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:5882 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-001000 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1122 20:34:58.429049    1523 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 20:34:58.534210    1523 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:false NGoroutines:51 SystemTime:2023-11-23 04:34:58.52112086 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:6 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218719232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-160d99154625 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=un
confined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:
Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Doc
ker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1122 20:34:58.537300    1523 cni.go:84] Creating CNI manager for ""
	I1122 20:34:58.537324    1523 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1122 20:34:58.537341    1523 start_flags.go:323] config:
	{Name:download-only-001000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:5882 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-001000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1122 20:34:58.559021    1523 out.go:97] Starting control plane node download-only-001000 in cluster download-only-001000
	I1122 20:34:58.559064    1523 cache.go:121] Beginning downloading kic base image for docker with docker
	I1122 20:34:58.580823    1523 out.go:97] Pulling base image ...
	I1122 20:34:58.580903    1523 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 20:34:58.580988    1523 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1122 20:34:58.632872    1523 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1122 20:34:58.633050    1523 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory
	I1122 20:34:58.633071    1523 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory, skipping pull
	I1122 20:34:58.633078    1523 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in cache, skipping pull
	I1122 20:34:58.633086    1523 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 as a tarball
	I1122 20:34:58.638890    1523 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1122 20:34:58.638901    1523 cache.go:56] Caching tarball of preloaded images
	I1122 20:34:58.639042    1523 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 20:34:58.659544    1523 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1122 20:34:58.659554    1523 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1122 20:34:58.737772    1523 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1122 20:35:02.393814    1523 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1122 20:35:02.393995    1523 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17659-904/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1122 20:35:03.014307    1523 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1122 20:35:03.014385    1523 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/download-only-001000/config.json ...
	I1122 20:35:03.014785    1523 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1122 20:35:03.015122    1523 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17659-904/.minikube/cache/darwin/amd64/v1.28.4/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-001000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.30s)
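Both download-only runs above fetch the preload tarball with a `?checksum=md5:<hex>` query (download.go:107) and then verify the file before caching it (preload.go:249/256). A minimal standard-library sketch of that download-and-verify step follows; the URL and md5 digest are copied from the v1.28.4 log, while the function name and local path are hypothetical and not minikube's download package.

package main

import (
	"crypto/md5"
	"fmt"
	"io"
	"net/http"
	"os"
)

// verifyMD5 hashes the downloaded file and compares it to the expected hex digest,
// mirroring the "verifying checksum" step at preload.go:256 above.
func verifyMD5(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := fmt.Sprintf("%x", h.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s want %s", got, wantHex)
	}
	return nil
}

func main() {
	// URL and digest taken from the download.go:107 line in the v1.28.4 log above.
	const url = "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4"
	const wantMD5 = "7ebdea7754e21f51b865dbfc36b53b7d"
	const dest = "preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4" // hypothetical local path

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, err := os.Create(dest)
	if err != nil {
		panic(err)
	}
	if _, err := io.Copy(out, resp.Body); err != nil {
		panic(err)
	}
	out.Close()
	if err := verifyMD5(dest, wantMD5); err != nil {
		panic(err)
	}
	fmt.Println("preload downloaded and checksum verified")
}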

                                                
                                    
TestDownloadOnly/DeleteAll (0.63s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.63s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-001000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.37s)

                                                
                                    
TestDownloadOnlyKic (1.99s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-327000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-327000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-327000
--- PASS: TestDownloadOnlyKic (1.99s)

                                                
                                    
TestBinaryMirror (1.6s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-400000 --alsologtostderr --binary-mirror http://127.0.0.1:49329 --driver=docker 
helpers_test.go:175: Cleaning up "binary-mirror-400000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-400000
--- PASS: TestBinaryMirror (1.60s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-853000
addons_test.go:927: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-853000: exit status 85 (187.356391ms)

                                                
                                                
-- stdout --
	* Profile "addons-853000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-853000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-853000
addons_test.go:938: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-853000: exit status 85 (207.24682ms)

                                                
                                                
-- stdout --
	* Profile "addons-853000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-853000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

                                                
                                    
TestAddons/Setup (140.83s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-853000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-853000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m20.828607897s)
--- PASS: TestAddons/Setup (140.83s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.03s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-mk9d8" [6b4bdfa0-027d-4d84-9582-22b734ac4658] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.027708657s
addons_test.go:840: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-853000
addons_test.go:840: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-853000: (6.003229436s)
--- PASS: TestAddons/parallel/InspektorGadget (11.03s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.94s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 4.145017ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-jzwpl" [691ec973-ccae-4682-be6c-f1643dea200d] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014940351s
addons_test.go:414: (dbg) Run:  kubectl --context addons-853000 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-darwin-amd64 -p addons-853000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.94s)

                                                
                                    
TestAddons/parallel/HelmTiller (10.07s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 3.323449ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-9vpww" [2b999fa6-b79e-452d-8d17-d2ded6dec99b] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.014886149s
addons_test.go:472: (dbg) Run:  kubectl --context addons-853000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-853000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.303100954s)
addons_test.go:489: (dbg) Run:  out/minikube-darwin-amd64 -p addons-853000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.07s)

                                                
                                    
TestAddons/parallel/CSI (72.06s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 16.244855ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-853000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-853000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [0313df4b-123b-4585-bf9c-fdb3d870de6b] Pending
helpers_test.go:344: "task-pv-pod" [0313df4b-123b-4585-bf9c-fdb3d870de6b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [0313df4b-123b-4585-bf9c-fdb3d870de6b] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.014265933s
addons_test.go:583: (dbg) Run:  kubectl --context addons-853000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-853000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-853000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-853000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-853000 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-853000 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-853000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-853000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [835730ff-55ab-4a65-b7e8-18372dbfd880] Pending
helpers_test.go:344: "task-pv-pod-restore" [835730ff-55ab-4a65-b7e8-18372dbfd880] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [835730ff-55ab-4a65-b7e8-18372dbfd880] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.018744291s
addons_test.go:625: (dbg) Run:  kubectl --context addons-853000 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-853000 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-853000 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-darwin-amd64 -p addons-853000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-darwin-amd64 -p addons-853000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.760850243s)
addons_test.go:641: (dbg) Run:  out/minikube-darwin-amd64 -p addons-853000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:641: (dbg) Done: out/minikube-darwin-amd64 -p addons-853000 addons disable volumesnapshots --alsologtostderr -v=1: (1.087040715s)
--- PASS: TestAddons/parallel/CSI (72.06s)
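The long runs of helpers_test.go:394 lines in the CSI log above are the harness polling `kubectl get pvc ... -o jsonpath={.status.phase}` until the claim is Bound (and later until hpvc-restore binds). A sketch of that wait loop is below; the profile, PVC name, namespace, and 6m budget come from the log, while the 2-second poll interval and function name are assumptions rather than the test harness's actual helper.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase runs the same jsonpath query that helpers_test.go:394 issues above,
// repeating it until the PVC reports the wanted phase or the timeout expires.
func waitForPVCPhase(kubeContext, pvc, ns, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", pvc, "-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed interval; the harness's own cadence may differ
	}
	return fmt.Errorf("pvc %s/%s did not reach phase %q within %v", ns, pvc, want, timeout)
}

func main() {
	// Values from the TestAddons/parallel/CSI log: context addons-853000, pvc hpvc,
	// namespace default, 6m0s wait budget.
	if err := waitForPVCPhase("addons-853000", "hpvc", "default", "Bound", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pvc hpvc is Bound")
}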

                                                
                                    
TestAddons/parallel/Headlamp (14.47s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-853000 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-853000 --alsologtostderr -v=1: (1.454089246s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-2th2q" [456dd263-652f-4525-80ad-76160cca4afb] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-2th2q" [456dd263-652f-4525-80ad-76160cca4afb] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.011663261s
--- PASS: TestAddons/parallel/Headlamp (14.47s)
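Note: the parallel addon tests above all share one pattern: enable the addon, then wait for its pods by label selector. For Headlamp (selector and namespace taken from the output) that is roughly:

    out/minikube-darwin-amd64 addons enable headlamp -p addons-853000 --alsologtostderr -v=1
    kubectl --context addons-853000 get pods -n headlamp -l app.kubernetes.io/name=headlamp   # wait until Running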

                                                
                                    
TestAddons/parallel/CloudSpanner (5.67s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-l64w2" [0ddf9ca0-0240-462e-ba60-7b174f54fdc5] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.014088922s
addons_test.go:859: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-853000
--- PASS: TestAddons/parallel/CloudSpanner (5.67s)

                                                
                                    
TestAddons/parallel/LocalPath (52.37s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-853000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-853000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-853000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [61b9b46a-633d-4d95-a32f-91b4f14c23d7] Pending
helpers_test.go:344: "test-local-path" [61b9b46a-633d-4d95-a32f-91b4f14c23d7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [61b9b46a-633d-4d95-a32f-91b4f14c23d7] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [61b9b46a-633d-4d95-a32f-91b4f14c23d7] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.011032309s
addons_test.go:890: (dbg) Run:  kubectl --context addons-853000 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-darwin-amd64 -p addons-853000 ssh "cat /opt/local-path-provisioner/pvc-3f28a733-23a7-41b8-b2f3-b07c13772952_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-853000 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-853000 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-darwin-amd64 -p addons-853000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-darwin-amd64 -p addons-853000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.292811774s)
--- PASS: TestAddons/parallel/LocalPath (52.37s)
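Note: the LocalPath test exercises the storage-provisioner-rancher addon end to end: a PVC plus a busybox pod are applied, the pod writes a file and completes, and the file is then read back from the node's local-path directory over SSH. A sketch of the same flow (the provisioned directory name is run-specific; <pvc-uid> stands for the generated PV name):

    kubectl --context addons-853000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-853000 apply -f testdata/storage-provisioner-rancher/pod.yaml
    kubectl --context addons-853000 get pvc test-pvc -o jsonpath={.status.phase}   # polled while the pod runs
    out/minikube-darwin-amd64 -p addons-853000 ssh "cat /opt/local-path-provisioner/<pvc-uid>_default_test-pvc/file1"
    out/minikube-darwin-amd64 -p addons-853000 addons disable storage-provisioner-rancher --alsologtostderr -v=1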

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.77s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-vtz8b" [4eb846c9-117f-4d52-ac5b-1804cbb2de93] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.018687024s
addons_test.go:954: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-853000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.77s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.1s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-853000 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-853000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)
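Note: this check relies on the gcp-auth addon copying its pull secret into namespaces created after it was enabled, so the test only needs two commands; it passes when the secret is present in the new namespace:

    kubectl --context addons-853000 create ns new-namespace
    kubectl --context addons-853000 get secret gcp-auth -n new-namespace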

                                                
                                    
TestAddons/StoppedEnableDisable (11.61s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-853000
addons_test.go:171: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-853000: (10.894764955s)
addons_test.go:175: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-853000
addons_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-853000
addons_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-853000
--- PASS: TestAddons/StoppedEnableDisable (11.61s)
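Note: the point here is that addon enable/disable still works against a profile that has been stopped; the sequence from the output is simply:

    out/minikube-darwin-amd64 stop -p addons-853000
    out/minikube-darwin-amd64 addons enable dashboard -p addons-853000
    out/minikube-darwin-amd64 addons disable dashboard -p addons-853000
    out/minikube-darwin-amd64 addons disable gvisor -p addons-853000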

                                                
                                    
TestHyperKitDriverInstallOrUpdate (5.83s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (5.83s)

                                                
                                    
TestErrorSpam/setup (22.57s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-851000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-851000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-851000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-851000 --driver=docker : (22.567261834s)
--- PASS: TestErrorSpam/setup (22.57s)

                                                
                                    
TestErrorSpam/start (2.24s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-851000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-851000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-851000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-851000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-851000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-851000 start --dry-run
--- PASS: TestErrorSpam/start (2.24s)

                                                
                                    
TestErrorSpam/status (1.2s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-851000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-851000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-851000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-851000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-851000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-851000 status
--- PASS: TestErrorSpam/status (1.20s)

                                                
                                    
TestErrorSpam/pause (1.67s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-851000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-851000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-851000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-851000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-851000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-851000 pause
--- PASS: TestErrorSpam/pause (1.67s)

                                                
                                    
TestErrorSpam/unpause (1.8s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-851000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-851000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-851000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-851000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-851000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-851000 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

                                                
                                    
TestErrorSpam/stop (11.43s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-851000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-851000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-851000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-851000 stop: (10.827016536s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-851000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-851000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-851000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-851000 stop
--- PASS: TestErrorSpam/stop (11.43s)
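Note: each ErrorSpam subtest above reruns a single subcommand against the nospam-851000 profile with a dedicated --log_dir and, per the test's name, checks the combined output for unexpected warning or error spam. $LOG_DIR below stands for the temporary directory the harness created:

    out/minikube-darwin-amd64 -p nospam-851000 --log_dir $LOG_DIR status
    out/minikube-darwin-amd64 -p nospam-851000 --log_dir $LOG_DIR pause
    out/minikube-darwin-amd64 -p nospam-851000 --log_dir $LOG_DIR unpause
    out/minikube-darwin-amd64 -p nospam-851000 --log_dir $LOG_DIR stop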

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17659-904/.minikube/files/etc/test/nested/copy/1485/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (37.37s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-679000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-679000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (37.366011084s)
--- PASS: TestFunctional/serial/StartWithProxy (37.37s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (39.43s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-679000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-679000 --alsologtostderr -v=8: (39.426501939s)
functional_test.go:659: soft start took 39.427335387s for "functional-679000" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.43s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-679000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.43s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-679000 cache add registry.k8s.io/pause:3.1: (1.21466925s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-679000 cache add registry.k8s.io/pause:3.3: (1.141424668s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-679000 cache add registry.k8s.io/pause:latest: (1.077797235s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.43s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.68s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-679000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2074792161/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 cache add minikube-local-cache-test:functional-679000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-679000 cache add minikube-local-cache-test:functional-679000: (1.092481684s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 cache delete minikube-local-cache-test:functional-679000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-679000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.68s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)
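Note: the cache subtests above add images to minikube's local cache and then confirm they are visible from inside the node; distilled from the output:

    out/minikube-darwin-amd64 -p functional-679000 cache add registry.k8s.io/pause:3.1
    out/minikube-darwin-amd64 cache list                                    # the image is now listed in the cache
    out/minikube-darwin-amd64 -p functional-679000 ssh sudo crictl images   # ...and present in the node's container runtime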

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.99s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-679000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (385.887109ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.99s)
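Note: cache_reload demonstrates recovering an image that was removed from the node runtime but is still in minikube's cache (commands copied from the output):

    out/minikube-darwin-amd64 -p functional-679000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-darwin-amd64 -p functional-679000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
    out/minikube-darwin-amd64 -p functional-679000 cache reload
    out/minikube-darwin-amd64 -p functional-679000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # image restored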

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.55s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 kubectl -- --context functional-679000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.55s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.78s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-679000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.78s)

                                                
                                    
TestFunctional/serial/ExtraConfig (40.17s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-679000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1122 20:42:30.974166    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 20:42:30.980943    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 20:42:30.991433    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 20:42:31.013600    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 20:42:31.054320    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 20:42:31.134612    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 20:42:31.295515    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 20:42:31.615978    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 20:42:32.256112    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 20:42:33.536268    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
E1122 20:42:36.096651    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-679000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.16979787s)
functional_test.go:757: restart took 40.169947677s for "functional-679000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.17s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-679000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
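Note: ExtraConfig and ComponentHealth together amount to restarting the cluster with an apiserver flag injected via --extra-config and then confirming the control-plane pods are still healthy:

    out/minikube-darwin-amd64 start -p functional-679000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-679000 get po -l tier=control-plane -n kube-system -o=json   # etcd, kube-apiserver, kube-controller-manager and kube-scheduler should report Running/Ready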

                                                
                                    
TestFunctional/serial/LogsCmd (3.13s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 logs
E1122 20:42:41.218829    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-679000 logs: (3.127842669s)
--- PASS: TestFunctional/serial/LogsCmd (3.13s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (3.33s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd3342312547/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-679000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd3342312547/001/logs.txt: (3.330489753s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.33s)

                                                
                                    
TestFunctional/serial/InvalidService (5.74s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-679000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-679000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-679000: exit status 115 (583.895698ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31287 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-679000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-679000 delete -f testdata/invalidsvc.yaml: (1.916845637s)
--- PASS: TestFunctional/serial/InvalidService (5.74s)
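Note: InvalidService shows how the service command fails when a Service has no running backing pod; the testdata manifest creates such a service, and the command exits 115 with SVC_UNREACHABLE as seen above:

    kubectl --context functional-679000 apply -f testdata/invalidsvc.yaml
    out/minikube-darwin-amd64 service invalid-svc -p functional-679000     # exit 115: no running pod for service invalid-svc
    kubectl --context functional-679000 delete -f testdata/invalidsvc.yaml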

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-679000 config get cpus: exit status 14 (66.004484ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-679000 config get cpus: exit status 14 (63.255816ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)
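Note: ConfigCmd is a round trip through minikube's config store; config get on an unset key exits 14, as seen twice above:

    out/minikube-darwin-amd64 -p functional-679000 config set cpus 2
    out/minikube-darwin-amd64 -p functional-679000 config get cpus     # prints 2
    out/minikube-darwin-amd64 -p functional-679000 config unset cpus
    out/minikube-darwin-amd64 -p functional-679000 config get cpus     # exit 14: key not found in config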

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-679000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-679000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3751: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.17s)

                                                
                                    
TestFunctional/parallel/DryRun (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-679000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-679000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (1.011228643s)

                                                
                                                
-- stdout --
	* [functional-679000] minikube v1.32.0 on Darwin 14.1.1
	  - MINIKUBE_LOCATION=17659
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 20:44:20.023188    3631 out.go:296] Setting OutFile to fd 1 ...
	I1122 20:44:20.023559    3631 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 20:44:20.023568    3631 out.go:309] Setting ErrFile to fd 2...
	I1122 20:44:20.023574    3631 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 20:44:20.023831    3631 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
	I1122 20:44:20.026611    3631 out.go:303] Setting JSON to false
	I1122 20:44:20.057564    3631 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":834,"bootTime":1700713826,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1122 20:44:20.057690    3631 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1122 20:44:20.080749    3631 out.go:177] * [functional-679000] minikube v1.32.0 on Darwin 14.1.1
	I1122 20:44:20.144486    3631 out.go:177]   - MINIKUBE_LOCATION=17659
	I1122 20:44:20.123309    3631 notify.go:220] Checking for updates...
	I1122 20:44:20.186185    3631 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	I1122 20:44:20.228346    3631 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1122 20:44:20.286485    3631 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 20:44:20.329279    3631 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	I1122 20:44:20.350389    3631 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 20:44:20.371724    3631 config.go:182] Loaded profile config "functional-679000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1122 20:44:20.372361    3631 driver.go:378] Setting default libvirt URI to qemu:///system
	I1122 20:44:20.483937    3631 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.2 (129061)
	I1122 20:44:20.484088    3631 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 20:44:20.656879    3631 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:false NGoroutines:59 SystemTime:2023-11-23 04:44:20.615051838 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218719232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-160d99154625 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=u
nconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription
:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Do
cker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1122 20:44:20.755210    3631 out.go:177] * Using the docker driver based on existing profile
	I1122 20:44:20.829341    3631 start.go:298] selected driver: docker
	I1122 20:44:20.829372    3631 start.go:902] validating driver "docker" against &{Name:functional-679000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-679000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1122 20:44:20.829470    3631 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 20:44:20.854221    3631 out.go:177] 
	W1122 20:44:20.875399    3631 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1122 20:44:20.896395    3631 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-679000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.97s)
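Note: DryRun validates flags without creating anything; a request below the usable minimum of 1800MB fails with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23), while a valid dry run succeeds:

    out/minikube-darwin-amd64 start -p functional-679000 --dry-run --memory 250MB --alsologtostderr --driver=docker   # exit 23
    out/minikube-darwin-amd64 start -p functional-679000 --dry-run --alsologtostderr -v=1 --driver=docker             # passes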

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-679000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-679000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (839.898594ms)

                                                
                                                
-- stdout --
	* [functional-679000] minikube v1.32.0 sur Darwin 14.1.1
	  - MINIKUBE_LOCATION=17659
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 20:44:21.966968    3711 out.go:296] Setting OutFile to fd 1 ...
	I1122 20:44:21.967208    3711 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 20:44:21.967214    3711 out.go:309] Setting ErrFile to fd 2...
	I1122 20:44:21.967218    3711 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1122 20:44:21.967424    3711 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
	I1122 20:44:21.969233    3711 out.go:303] Setting JSON to false
	I1122 20:44:21.991607    3711 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":835,"bootTime":1700713826,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.1","kernelVersion":"23.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1122 20:44:21.991704    3711 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1122 20:44:22.013483    3711 out.go:177] * [functional-679000] minikube v1.32.0 sur Darwin 14.1.1
	I1122 20:44:22.093386    3711 out.go:177]   - MINIKUBE_LOCATION=17659
	I1122 20:44:22.071917    3711 notify.go:220] Checking for updates...
	I1122 20:44:22.135583    3711 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
	I1122 20:44:22.193746    3711 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1122 20:44:22.252639    3711 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 20:44:22.296751    3711 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube
	I1122 20:44:22.359646    3711 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 20:44:22.381044    3711 config.go:182] Loaded profile config "functional-679000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1122 20:44:22.381845    3711 driver.go:378] Setting default libvirt URI to qemu:///system
	I1122 20:44:22.442649    3711 docker.go:122] docker version: linux-24.0.6:Docker Desktop 4.25.2 (129061)
	I1122 20:44:22.442779    3711 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 20:44:22.564102    3711 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:false NGoroutines:59 SystemTime:2023-11-23 04:44:22.551097393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6218719232 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:linuxkit-160d99154625 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=u
nconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.23.0-desktop.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription
:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.9] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Do
cker Scout Vendor:Docker Inc. Version:v1.0.9]] Warnings:<nil>}}
	I1122 20:44:22.586100    3711 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1122 20:44:22.628742    3711 start.go:298] selected driver: docker
	I1122 20:44:22.628759    3711 start.go:902] validating driver "docker" against &{Name:functional-679000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-679000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1122 20:44:22.628861    3711 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 20:44:22.652866    3711 out.go:177] 
	W1122 20:44:22.673869    3711 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1122 20:44:22.716031    3711 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.84s)
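Note: the French output above comes from minikube's built-in translations; the test presumably selects them by exporting a French locale before running the same dry-run command (assumption: LC_ALL is the variable used):

    LC_ALL=fr out/minikube-darwin-amd64 start -p functional-679000 --dry-run --memory 250MB --alsologtostderr --driver=docker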

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)
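The -f flag used above takes a Go template rendered against minikube's status fields (Host, Kubelet, APIServer, Kubeconfig). A minimal sketch of driving the same check from Go, assuming the out/minikube-darwin-amd64 binary and the functional-679000 profile from this run are present:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// -f renders a Go template over the status struct; a stopped component makes
	// `minikube status` exit non-zero, so the error is logged rather than fatal.
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-679000",
		"status", "-f", "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}").CombinedOutput()
	if err != nil {
		log.Printf("status exited non-zero: %v", err)
	}
	fmt.Printf("%s\n", out)
}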

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (26.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0beeee78-d205-49e3-8fc1-0542dabd13c4] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.011720047s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-679000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-679000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-679000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-679000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [676fa0a3-07da-4ada-8f9f-785f5e8bf6e7] Pending
helpers_test.go:344: "sp-pod" [676fa0a3-07da-4ada-8f9f-785f5e8bf6e7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1122 20:43:52.898285    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [676fa0a3-07da-4ada-8f9f-785f5e8bf6e7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.012980268s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-679000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-679000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-679000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fbd9f996-fa12-4128-9ceb-e2ce607d0f17] Pending
helpers_test.go:344: "sp-pod" [fbd9f996-fa12-4128-9ceb-e2ce607d0f17] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fbd9f996-fa12-4128-9ceb-e2ce607d0f17] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.010026199s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-679000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.64s)
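The "waiting ... for pods matching" lines above are a poll loop: apply the PVC and pod manifests, then re-query the pod until it reports Running. A sketch of the same wait outside the test harness, assuming kubectl, the functional-679000 context, and the sp-pod name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(3 * time.Minute)
	for time.Now().Before(deadline) {
		// jsonpath keeps the probe to a single value; any error just means "not yet".
		out, err := exec.Command("kubectl", "--context", "functional-679000",
			"get", "pod", "sp-pod", "-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("sp-pod is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for sp-pod")
}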

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.78s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh -n functional-679000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 cp functional-679000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd946495841/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh -n functional-679000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.86s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (35.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-679000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-s6g9j" [eb6a3dda-5211-4c3d-ac26-413037f0ea8e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-s6g9j" [eb6a3dda-5211-4c3d-ac26-413037f0ea8e] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.069492301s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-679000 exec mysql-859648c796-s6g9j -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-679000 exec mysql-859648c796-s6g9j -- mysql -ppassword -e "show databases;": exit status 1 (130.737183ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-679000 exec mysql-859648c796-s6g9j -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-679000 exec mysql-859648c796-s6g9j -- mysql -ppassword -e "show databases;": exit status 1 (128.050636ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-679000 exec mysql-859648c796-s6g9j -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-679000 exec mysql-859648c796-s6g9j -- mysql -ppassword -e "show databases;": exit status 1 (124.095878ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-679000 exec mysql-859648c796-s6g9j -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (35.43s)
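The repeated non-zero exits above (ERROR 1045, then ERROR 2002) are expected while mysqld is still initializing inside the container; the test simply retries the probe until it succeeds. A sketch of that retry loop, assuming the pod name from this run:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-679000", "exec", "mysql-859648c796-s6g9j", "--",
		"mysql", "-ppassword", "-e", "show databases;"}
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		// ERROR 1045 / ERROR 2002 are expected while mysqld is still starting up.
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(3 * time.Second)
	}
}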

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1485/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh "sudo cat /etc/test/nested/copy/1485/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1485.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh "sudo cat /etc/ssl/certs/1485.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1485.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh "sudo cat /usr/share/ca-certificates/1485.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14852.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh "sudo cat /etc/ssl/certs/14852.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14852.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh "sudo cat /usr/share/ca-certificates/14852.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.69s)
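The hashed filenames checked above (51391683.0, 3ec20f2e.0) appear to follow the OpenSSL subject-hash convention for CA directories, which is why each synced PEM is also looked up under a hash-named .0 file. A small sketch of computing that expected name; the local certificate path here is illustrative only:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// certPath is a hypothetical local copy of the PEM file that was synced into the node.
	certPath := "testdata/1485.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("expected hashed name in the node: /etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
}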

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-679000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh "sudo systemctl is-active crio"
E1122 20:42:51.458982    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-679000 ssh "sudo systemctl is-active crio": exit status 1 (547.410796ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)
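The non-zero exit above is the expected result, not a failure: `systemctl is-active` exits 0 only when the unit is active, so an inactive crio prints "inactive" and exits non-zero, which the ssh wrapper propagates as status 3. A minimal sketch of the same check, assuming the binary and profile from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-679000",
		"ssh", "sudo systemctl is-active crio").Output()
	state := strings.TrimSpace(string(out))
	if err != nil && state == "inactive" {
		fmt.Println("crio is disabled, as expected when docker is the container runtime")
		return
	}
	fmt.Printf("unexpected state %q (err: %v)\n", state, err)
}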

                                                
                                    
x
+
TestFunctional/parallel/License (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 version -o=json --components
2023/11/22 20:44:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/Version/components (0.87s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-679000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-679000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-679000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-679000 image ls --format short --alsologtostderr:
I1122 20:44:36.346468    3987 out.go:296] Setting OutFile to fd 1 ...
I1122 20:44:36.346765    3987 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1122 20:44:36.346773    3987 out.go:309] Setting ErrFile to fd 2...
I1122 20:44:36.346778    3987 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1122 20:44:36.346981    3987 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
I1122 20:44:36.347718    3987 config.go:182] Loaded profile config "functional-679000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1122 20:44:36.347847    3987 config.go:182] Loaded profile config "functional-679000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1122 20:44:36.348323    3987 cli_runner.go:164] Run: docker container inspect functional-679000 --format={{.State.Status}}
I1122 20:44:36.441800    3987 ssh_runner.go:195] Run: systemctl --version
I1122 20:44:36.441880    3987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-679000
I1122 20:44:36.496038    3987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49959 SSHKeyPath:/Users/jenkins/minikube-integration/17659-904/.minikube/machines/functional-679000/id_rsa Username:docker}
I1122 20:44:36.584449    3987 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)
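The stderr above shows how `image ls` is served: minikube SSHes into the node and runs `docker images --no-trunc --format "{{json .}}"`, which emits one JSON object per line. A sketch of parsing that output locally; the struct fields mirror the docker CLI format keys:

package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type image struct {
	Repository string `json:"Repository"`
	Tag        string `json:"Tag"`
	ID         string `json:"ID"`
	Size       string `json:"Size"`
}

func main() {
	// Same command the stderr above shows being run inside the node.
	out, err := exec.Command("docker", "images", "--no-trunc", "--format", "{{json .}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		var img image
		if err := json.Unmarshal(sc.Bytes(), &img); err != nil {
			continue // skip malformed lines
		}
		fmt.Printf("%s:%s\n", img.Repository, img.Tag)
	}
}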

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-679000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/google-containers/addon-resizer      | functional-679000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| docker.io/library/nginx                     | alpine            | b135667c98980 | 47.7MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/library/nginx                     | latest            | a6bd71f48f683 | 187MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| docker.io/library/mysql                     | 5.7               | bdba757bc9336 | 501MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-679000 | 7b3450c8621b8 | 30B    |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-679000 image ls --format table --alsologtostderr:
I1122 20:44:37.162983    4021 out.go:296] Setting OutFile to fd 1 ...
I1122 20:44:37.163353    4021 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1122 20:44:37.163359    4021 out.go:309] Setting ErrFile to fd 2...
I1122 20:44:37.163364    4021 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1122 20:44:37.163588    4021 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
I1122 20:44:37.164348    4021 config.go:182] Loaded profile config "functional-679000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1122 20:44:37.164449    4021 config.go:182] Loaded profile config "functional-679000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1122 20:44:37.164996    4021 cli_runner.go:164] Run: docker container inspect functional-679000 --format={{.State.Status}}
I1122 20:44:37.221707    4021 ssh_runner.go:195] Run: systemctl --version
I1122 20:44:37.221798    4021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-679000
I1122 20:44:37.276151    4021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49959 SSHKeyPath:/Users/jenkins/minikube-integration/17659-904/.minikube/machines/functional-679000/id_rsa Username:docker}
I1122 20:44:37.365868    4021 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-679000 image ls --format json --alsologtostderr:
[{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-679000"],"size":"32900000"},{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"e6f1816883972d4be47bd
48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"7b3450c8621b8a88682d067389a81af10b0feb36e127956f4b45bc397f3f93de","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-679000"],"size":"30"},{"id":"b135667c98980d3ca424a228cc4d2afdb287dc4e1a6a813a34b2e1705517488e","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47700000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"
repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"5
01000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-679000 image ls --format json --alsologtostderr:
I1122 20:44:36.847647    4008 out.go:296] Setting OutFile to fd 1 ...
I1122 20:44:36.847981    4008 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1122 20:44:36.847989    4008 out.go:309] Setting ErrFile to fd 2...
I1122 20:44:36.847993    4008 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1122 20:44:36.848209    4008 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
I1122 20:44:36.848839    4008 config.go:182] Loaded profile config "functional-679000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1122 20:44:36.848931    4008 config.go:182] Loaded profile config "functional-679000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1122 20:44:36.849389    4008 cli_runner.go:164] Run: docker container inspect functional-679000 --format={{.State.Status}}
I1122 20:44:36.906561    4008 ssh_runner.go:195] Run: systemctl --version
I1122 20:44:36.906647    4008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-679000
I1122 20:44:36.962504    4008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49959 SSHKeyPath:/Users/jenkins/minikube-integration/17659-904/.minikube/machines/functional-679000/id_rsa Username:docker}
I1122 20:44:37.048078    4008 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-679000 image ls --format yaml --alsologtostderr:
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-679000
size: "32900000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: b135667c98980d3ca424a228cc4d2afdb287dc4e1a6a813a34b2e1705517488e
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47700000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 7b3450c8621b8a88682d067389a81af10b0feb36e127956f4b45bc397f3f93de
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-679000
size: "30"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-679000 image ls --format yaml --alsologtostderr:
I1122 20:44:36.523426    3994 out.go:296] Setting OutFile to fd 1 ...
I1122 20:44:36.523731    3994 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1122 20:44:36.523737    3994 out.go:309] Setting ErrFile to fd 2...
I1122 20:44:36.523741    3994 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1122 20:44:36.523959    3994 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
I1122 20:44:36.524600    3994 config.go:182] Loaded profile config "functional-679000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1122 20:44:36.524696    3994 config.go:182] Loaded profile config "functional-679000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1122 20:44:36.525173    3994 cli_runner.go:164] Run: docker container inspect functional-679000 --format={{.State.Status}}
I1122 20:44:36.583913    3994 ssh_runner.go:195] Run: systemctl --version
I1122 20:44:36.583990    3994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-679000
I1122 20:44:36.641600    3994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49959 SSHKeyPath:/Users/jenkins/minikube-integration/17659-904/.minikube/machines/functional-679000/id_rsa Username:docker}
I1122 20:44:36.731077    3994 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-679000 ssh pgrep buildkitd: exit status 1 (384.623088ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 image build -t localhost/my-image:functional-679000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-679000 image build -t localhost/my-image:functional-679000 testdata/build --alsologtostderr: (2.156929651s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-679000 image build -t localhost/my-image:functional-679000 testdata/build --alsologtostderr:
I1122 20:44:37.078621    4018 out.go:296] Setting OutFile to fd 1 ...
I1122 20:44:37.099057    4018 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1122 20:44:37.099068    4018 out.go:309] Setting ErrFile to fd 2...
I1122 20:44:37.099072    4018 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1122 20:44:37.099276    4018 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17659-904/.minikube/bin
I1122 20:44:37.099928    4018 config.go:182] Loaded profile config "functional-679000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1122 20:44:37.100559    4018 config.go:182] Loaded profile config "functional-679000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1122 20:44:37.101139    4018 cli_runner.go:164] Run: docker container inspect functional-679000 --format={{.State.Status}}
I1122 20:44:37.160642    4018 ssh_runner.go:195] Run: systemctl --version
I1122 20:44:37.160755    4018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-679000
I1122 20:44:37.218618    4018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49959 SSHKeyPath:/Users/jenkins/minikube-integration/17659-904/.minikube/machines/functional-679000/id_rsa Username:docker}
I1122 20:44:37.306643    4018 build_images.go:151] Building image from path: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.504674223.tar
I1122 20:44:37.306717    4018 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1122 20:44:37.316078    4018 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.504674223.tar
I1122 20:44:37.320348    4018 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.504674223.tar: stat -c "%s %y" /var/lib/minikube/build/build.504674223.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.504674223.tar': No such file or directory
I1122 20:44:37.320380    4018 ssh_runner.go:362] scp /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.504674223.tar --> /var/lib/minikube/build/build.504674223.tar (3072 bytes)
I1122 20:44:37.343221    4018 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.504674223
I1122 20:44:37.353401    4018 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.504674223 -xf /var/lib/minikube/build/build.504674223.tar
I1122 20:44:37.403445    4018 docker.go:346] Building image: /var/lib/minikube/build/build.504674223
I1122 20:44:37.403677    4018 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-679000 /var/lib/minikube/build/build.504674223
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.0s

                                                
                                                
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.8s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:7b4ce0bdf28774695a4c11e40b7ca469f058cca6ed0777e4c961f469f38ea28a done
#8 naming to localhost/my-image:functional-679000 done
#8 DONE 0.0s
I1122 20:44:39.131685    4018 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-679000 /var/lib/minikube/build/build.504674223: (1.72799024s)
I1122 20:44:39.131757    4018 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.504674223
I1122 20:44:39.141069    4018 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.504674223.tar
I1122 20:44:39.149830    4018 build_images.go:207] Built localhost/my-image:functional-679000 from /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.504674223.tar
I1122 20:44:39.149859    4018 build_images.go:123] succeeded building to: functional-679000
I1122 20:44:39.149864    4018 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.84s)
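The sequence above - probe for buildkitd over ssh (the non-zero exit just means the daemon is not running), then ship the testdata/build context to the node and run docker build there - can be reproduced with two minikube invocations. A minimal sketch, assuming the same binary, profile, and build context:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	mk := "out/minikube-darwin-amd64"
	// Probe for buildkitd inside the node, mirroring the pgrep step above.
	if err := exec.Command(mk, "-p", "functional-679000", "ssh", "pgrep buildkitd").Run(); err != nil {
		fmt.Println("buildkitd not running in the node")
	}
	// Build the local context through minikube; the tarball transfer and
	// `docker build` inside the node happen behind this one command.
	out, err := exec.Command(mk, "-p", "functional-679000", "image", "build",
		"-t", "localhost/my-image:functional-679000", "testdata/build").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("image build failed:", err)
	}
}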

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.299863377s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-679000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.39s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv/bash (2.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-679000 docker-env) && out/minikube-darwin-amd64 status -p functional-679000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-679000 docker-env) && out/minikube-darwin-amd64 status -p functional-679000": (1.306294562s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-679000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.19s)
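`docker-env` only prints export statements; the check above proves they work by eval-ing them in a bash subshell and then running docker against the node's daemon. The same check outside the test harness, as a sketch:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Evaluate the exported DOCKER_* variables in a bash subshell, then list the
	// images that the node's daemon (not the host daemon) knows about.
	script := `eval $(out/minikube-darwin-amd64 -p functional-679000 docker-env) && docker images`
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		log.Fatalf("docker-env check failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}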

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 image load --daemon gcr.io/google-containers/addon-resizer:functional-679000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-679000 image load --daemon gcr.io/google-containers/addon-resizer:functional-679000 --alsologtostderr: (4.126619021s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.44s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 image load --daemon gcr.io/google-containers/addon-resizer:functional-679000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-679000 image load --daemon gcr.io/google-containers/addon-resizer:functional-679000 --alsologtostderr: (2.344750638s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.76s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.121225862s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-679000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 image load --daemon gcr.io/google-containers/addon-resizer:functional-679000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-679000 image load --daemon gcr.io/google-containers/addon-resizer:functional-679000 --alsologtostderr: (4.713563633s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 image save gcr.io/google-containers/addon-resizer:functional-679000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-679000 image save gcr.io/google-containers/addon-resizer:functional-679000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.979567939s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.98s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 image rm gcr.io/google-containers/addon-resizer:functional-679000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.82s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
E1122 20:43:11.938829    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/addons-853000/client.crt: no such file or directory
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-679000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.331274494s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.66s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-679000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 image save --daemon gcr.io/google-containers/addon-resizer:functional-679000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-679000 image save --daemon gcr.io/google-containers/addon-resizer:functional-679000 --alsologtostderr: (1.639810577s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-679000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.76s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (15.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-679000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-679000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-b7jbz" [3bceaf29-d0c1-4473-8e3e-56eebe546e91] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-b7jbz" [3bceaf29-d0c1-4473-8e3e-56eebe546e91] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 15.017534257s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (15.16s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 service list -o json
functional_test.go:1493: Took "426.344655ms" to run "out/minikube-darwin-amd64 -p functional-679000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-679000 service --namespace=default --https --url hello-node: signal: killed (15.001783128s)

                                                
                                                
-- stdout --
	https://127.0.0.1:50201

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1521: found endpoint: https://127.0.0.1:50201
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)
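The "Non-zero exit ... signal: killed" above is expected: on darwin with the Docker driver, `minikube service --url` stays in the foreground to keep the port-forward open (hence the "terminal needs to be open" warning), so the test reads the endpoint from stdout and then kills the process after 15 seconds. A minimal sketch of that pattern, assuming the same profile and service names as the run above.

package main

import (
	"bufio"
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Give the command 15s, mirroring the test: it never exits on its own because
	// it has to hold the Docker port-forward open on darwin.
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64",
		"-p", "functional-679000", "service", "--namespace=default", "--https", "--url", "hello-node")

	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// Read the first URL-looking line; that is the forwarded endpoint,
	// e.g. https://127.0.0.1:50201 in the run above.
	scanner := bufio.NewScanner(stdout)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if strings.HasPrefix(line, "http") {
			fmt.Println("found endpoint:", line)
			break
		}
	}

	// Wait returns an error once the context kills the process; that is the expected outcome here.
	if err := cmd.Wait(); err != nil {
		fmt.Println("command ended (expected after timeout):", err)
	}
}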

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-679000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-679000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-679000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-679000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3458: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-679000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-679000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [38f137fd-a12b-4b41-890f-3b34f1157383] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [38f137fd-a12b-4b41-890f-3b34f1157383] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.013232782s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.20s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-679000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
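While `minikube tunnel` is running, the LoadBalancer service gets an ingress IP that the IngressIP test reads with a kubectl jsonpath query and AccessDirect then probes over plain HTTP ("tunnel at http://127.0.0.1 is working!"). A rough sketch of the same check, assuming the context and service names from the log above.

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Same query the test runs: pull the LoadBalancer ingress IP assigned by `minikube tunnel`.
	out, err := exec.Command("kubectl", "--context", "functional-679000",
		"get", "svc", "nginx-svc", "-o",
		"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		panic(err)
	}
	ip := strings.TrimSpace(string(out))
	fmt.Println("ingress IP:", ip) // 127.0.0.1 on darwin with the Docker driver

	// Probe the service through the tunnel.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://" + ip)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("tunnel responded with:", resp.Status)
}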

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-679000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3488: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-679000 service hello-node --url --format={{.IP}}: signal: killed (15.002284883s)

                                                
                                                
-- stdout --
	127.0.0.1

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-679000 service hello-node --url: signal: killed (15.002607117s)

                                                
                                                
-- stdout --
	http://127.0.0.1:50274

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1564: found endpoint for hello-node: http://127.0.0.1:50274
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "396.349875ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "79.389072ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "399.169052ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "78.22458ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-679000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3611651945/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1700714659763923000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3611651945/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1700714659763923000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3611651945/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1700714659763923000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3611651945/001/test-1700714659763923000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-679000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (522.030419ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 23 04:44 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 23 04:44 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 23 04:44 test-1700714659763923000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh cat /mount-9p/test-1700714659763923000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-679000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3908b605-49a8-49b3-af3c-35b6fbe218cf] Pending
helpers_test.go:344: "busybox-mount" [3908b605-49a8-49b3-af3c-35b6fbe218cf] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [3908b605-49a8-49b3-af3c-35b6fbe218cf] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [3908b605-49a8-49b3-af3c-35b6fbe218cf] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.013085447s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-679000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-679000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port3611651945/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.95s)
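The any-port subtest drives the full 9p mount lifecycle: write marker files on the host, start `minikube mount` as a background daemon, retry `findmnt` over ssh until the mount appears (the first attempt above fails with exit status 1 while the mount is still coming up), then read the files from inside the guest. A compressed sketch of that loop; the host directory is illustrative rather than the per-run temp dir used above.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const (
		profile = "functional-679000"
		hostDir = "/tmp/mount-test" // illustrative; the test uses a per-run temp dir
	)

	// Start the mount daemon in the background; it keeps running until killed.
	mount := exec.Command("out/minikube-darwin-amd64", "mount", "-p", profile,
		hostDir+":/mount-9p", "--alsologtostderr", "-v=1")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill() // equivalent to the test's "stopping [...] ..." step

	// Poll until the 9p mount shows up inside the guest, as the test does with findmnt.
	for i := 0; i < 10; i++ {
		err := exec.Command("out/minikube-darwin-amd64", "-p", profile,
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("mount is up")
			break
		}
		time.Sleep(time.Second)
	}

	// List what the guest sees under the mount point.
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", profile,
		"ssh", "--", "ls", "-la", "/mount-9p").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}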

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-679000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1637690304/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-679000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (445.7817ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-679000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1637690304/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-679000 ssh "sudo umount -f /mount-9p": exit status 1 (405.707759ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-679000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-679000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1637690304/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.42s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (3.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-679000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1222063663/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-679000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1222063663/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-679000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1222063663/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-679000 ssh "findmnt -T" /mount1: exit status 1 (599.967838ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-679000 ssh "findmnt -T" /mount1: exit status 1 (905.607471ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-679000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-679000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-679000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1222063663/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-679000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1222063663/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-679000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1222063663/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (3.72s)

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.14s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-679000
--- PASS: TestFunctional/delete_addon-resizer_images (0.14s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-679000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-679000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                    
x
+
TestImageBuild/serial/Setup (21.66s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-723000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-723000 --driver=docker : (21.65811053s)
--- PASS: TestImageBuild/serial/Setup (21.66s)

                                                
                                    
x
+
TestImageBuild/serial/NormalBuild (1.66s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-723000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-723000: (1.661048125s)
--- PASS: TestImageBuild/serial/NormalBuild (1.66s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithBuildArg (0.94s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-723000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.94s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithDockerIgnore (0.79s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-723000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.79s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.73s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-723000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.73s)
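The four ImageBuild subtests cover the main `minikube image build` variations recorded above: a plain build, build args plus no-cache via repeated --build-opt, a context exercising .dockerignore (reusing the test-normal directory, per the log), and an alternate Dockerfile selected with -f. A small sketch that replays the same argument sets outside the test harness.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same four build variations the ImageBuild subtests run above, in order.
	builds := [][]string{
		{"image", "build", "-t", "aaa:latest", "./testdata/image-build/test-normal", "-p", "image-723000"},
		{"image", "build", "-t", "aaa:latest", "--build-opt=build-arg=ENV_A=test_env_str", "--build-opt=no-cache", "./testdata/image-build/test-arg", "-p", "image-723000"},
		{"image", "build", "-t", "aaa:latest", "./testdata/image-build/test-normal", "--build-opt=no-cache", "-p", "image-723000"},
		{"image", "build", "-t", "aaa:latest", "-f", "inner/Dockerfile", "./testdata/image-build/test-f", "-p", "image-723000"},
	}
	for _, args := range builds {
		out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("build %v failed: %v\n%s", args, err, out))
		}
	}
	fmt.Println("all builds completed")
}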

                                                
                                    
x
+
TestJSONOutput/start/Command (35.75s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-286000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E1122 20:53:25.620577    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-286000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (35.749155965s)
--- PASS: TestJSONOutput/start/Command (35.75s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-286000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.59s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-286000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (10.82s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-286000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-286000 --output=json --user=testUser: (10.815073895s)
--- PASS: TestJSONOutput/stop/Command (10.82s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.76s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-025000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-025000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (385.804342ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"76f06065-5064-4551-9be0-9c50ef3b9c93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-025000] minikube v1.32.0 on Darwin 14.1.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ceb6503-3148-4015-a87e-de5697eba6ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17659"}}
	{"specversion":"1.0","id":"96563976-2880-4dab-99cc-614bdcd70fdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig"}}
	{"specversion":"1.0","id":"a43d5581-ae9d-428f-bc63-fcc14164bf5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"2ea34232-6fe8-485f-832a-25a4daae41b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1dd095a4-cbcb-45c9-baf6-be4208ca1d9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17659-904/.minikube"}}
	{"specversion":"1.0","id":"2a4c807c-0336-437c-8a53-664df6378fc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a1779980-daf8-4392-9507-3f7fb6d2bc89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-025000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-025000
--- PASS: TestErrorJSONOutput (0.76s)
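The --output=json lines captured above are CloudEvents-style envelopes, one JSON object per line, with the event payload under "data" (for the error event that payload carries exitcode, name, and message). A small decoder sketch for lines of this shape; the struct fields simply mirror the keys visible in the captured stdout.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event mirrors the fields visible in the JSON lines captured above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Read `minikube start --output=json ...` output from stdin, one event per line.
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if !strings.HasPrefix(line, "{") {
			continue // skip anything that isn't a JSON event line
		}
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			fmt.Fprintln(os.Stderr, "skipping unparseable line:", err)
			continue
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		default:
			fmt.Println(ev.Data["message"])
		}
	}
}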

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (24.08s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-272000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-272000 --network=: (21.647370081s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-272000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-272000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-272000: (2.373664559s)
--- PASS: TestKicCustomNetwork/create_custom_network (24.08s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (23.8s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-525000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-525000 --network=bridge: (21.494206882s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-525000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-525000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-525000: (2.253657701s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.80s)

                                                
                                    
x
+
TestKicExistingNetwork (24.56s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-831000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-831000 --network=existing-network: (21.986883571s)
helpers_test.go:175: Cleaning up "existing-network-831000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-831000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-831000: (2.229416632s)
--- PASS: TestKicExistingNetwork (24.56s)

                                                
                                    
x
+
TestKicCustomSubnet (23.3s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-930000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-930000 --subnet=192.168.60.0/24: (20.850932553s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-930000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-930000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-930000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-930000: (2.396163602s)
--- PASS: TestKicCustomSubnet (23.30s)
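TestKicCustomSubnet verifies the requested --subnet by inspecting the Docker network minikube created, using the Go template "{{(index .IPAM.Config 0).Subnet}}". The same check can be done by decoding the `docker network inspect` JSON directly; the struct below only declares the fields the test's template reads.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	const network = "custom-subnet-930000" // the network name matches the profile, per the log above

	out, err := exec.Command("docker", "network", "inspect", network).Output()
	if err != nil {
		panic(err)
	}

	// `docker network inspect` prints a JSON array; we only need IPAM.Config[0].Subnet,
	// the same field the test reads with its --format template.
	var nets []struct {
		Name string `json:"Name"`
		IPAM struct {
			Config []struct {
				Subnet string `json:"Subnet"`
			} `json:"Config"`
		} `json:"IPAM"`
	}
	if err := json.Unmarshal(out, &nets); err != nil {
		panic(err)
	}
	if len(nets) == 0 || len(nets[0].IPAM.Config) == 0 {
		panic("network has no IPAM config")
	}
	fmt.Println("subnet:", nets[0].IPAM.Config[0].Subnet) // expected 192.168.60.0/24 in the run above
}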

                                                
                                    
x
+
TestKicStaticIP (24.12s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-748000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-748000 --static-ip=192.168.200.200: (21.45541021s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-748000 ip
helpers_test.go:175: Cleaning up "static-ip-748000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-748000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-748000: (2.433920823s)
--- PASS: TestKicStaticIP (24.12s)

                                                
                                    
x
+
TestMainNoArgs (0.08s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
x
+
TestMinikubeProfile (50.33s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-909000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-909000 --driver=docker : (21.65336664s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-911000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-911000 --driver=docker : (22.116507268s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-909000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-911000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-911000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-911000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-911000: (2.460078093s)
helpers_test.go:175: Cleaning up "first-909000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-909000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-909000: (2.415551555s)
--- PASS: TestMinikubeProfile (50.33s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (7.29s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-315000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-315000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.284104254s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.29s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-315000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (7.5s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-326000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-326000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.501228589s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.50s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-326000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (2.11s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-315000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-315000 --alsologtostderr -v=5: (2.108624242s)
--- PASS: TestMountStart/serial/DeleteFirst (2.11s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-326000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.57s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-326000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-326000: (1.565110667s)
--- PASS: TestMountStart/serial/Stop (1.57s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (8.4s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-326000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-326000: (7.399823247s)
--- PASS: TestMountStart/serial/RestartStopped (8.40s)

                                                
                                    
x
+
TestPreload (138.67s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-308000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E1122 21:42:58.121507    1485 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17659-904/.minikube/profiles/functional-679000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-308000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m18.758385739s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-308000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-308000 image pull gcr.io/k8s-minikube/busybox: (1.321716984s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-308000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-308000: (10.77030722s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-308000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-308000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (45.081101737s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-308000 image list
helpers_test.go:175: Cleaning up "test-preload-308000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-308000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-308000: (2.448481366s)
--- PASS: TestPreload (138.67s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (6.28s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17659
- KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2079466501/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2079466501/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2079466501/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2079466501/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (6.28s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (8.33s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17659
- KUBECONFIG=/Users/jenkins/minikube-integration/17659-904/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current918673220/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current918673220/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current918673220/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current918673220/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (8.33s)

                                                
                                    

Test skip (17/189)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestAddons/parallel/Registry (13.85s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 14.873875ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-khn8c" [832736c6-12f8-4d7c-8c68-f0bfee21c660] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.014991787s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xkwvt" [f7a5af43-5873-43bf-96cc-4fba540f31c4] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.013753588s
addons_test.go:339: (dbg) Run:  kubectl --context addons-853000 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-853000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-853000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.750763227s)
addons_test.go:354: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (13.85s)
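
The registry and registry-proxy pods both come up healthy within about five seconds; only the remaining connectivity checks are skipped on this driver. The in-cluster probe the test already ran can be repeated against this profile (the context name addons-853000 is from this run):

$ # one-off busybox pod that probes the registry Service and is removed afterwards
$ kubectl --context addons-853000 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

wget --spider -S only fetches and prints the response headers, so an HTTP 200 there means the registry Service is reachable from inside the cluster.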

TestAddons/parallel/Ingress (12.33s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-853000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-853000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-853000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8a7f068f-5fff-45f3-86fe-29e00dd14b79] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8a7f068f-5fff-45f3-86fe-29e00dd14b79] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.014896147s
addons_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p addons-853000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:281: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (12.33s)
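
The addon itself works here: the nginx test pod goes Ready and the test gets as far as curling through the ingress from inside the node before skipping the DNS portion, which needs port forwarding on this driver. The same smoke check can be repeated against this profile using the commands from the log (the testdata manifests ship with the minikube integration tests):

$ # deploy the sample ingress resource and the nginx pod/service behind it
$ kubectl --context addons-853000 replace --force -f testdata/nginx-ingress-v1.yaml
$ kubectl --context addons-853000 replace --force -f testdata/nginx-pod-svc.yaml
$ # curl the ingress from inside the minikube node, selecting the rule via the Host header
$ out/minikube-darwin-amd64 -p addons-853000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"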

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (8.13s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-679000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-679000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-4ms5m" [f2bdcad3-6fc5-4351-b761-00a86f1af955] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-4ms5m" [f2bdcad3-6fc5-4351-b761-00a86f1af955] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.012418302s
functional_test.go:1645: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (8.13s)
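
The echoserver deployment and its NodePort service come up fine; the rest is skipped because the test is broken for port-forwarded drivers (minikube issue #7383, linked above). The setup it performs is only two commands; the last line below is not part of the test, just one way to get a reachable URL for such a service on this driver:

$ # recreate what the test sets up (context functional-679000 is this run's profile)
$ kubectl --context functional-679000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
$ kubectl --context functional-679000 expose deployment hello-node-connect --type=NodePort --port=8080
$ # prints a locally reachable URL and keeps a tunnel open while it runs
$ out/minikube-darwin-amd64 -p functional-679000 service hello-node-connect --url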

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
